Expose additional statistics about validators in gaia-light #1506
Comments
Hey! Ryan from Figment here 👋 We currently use the RPC server for this.
We don't necessarily need to store every block locally, and we currently store very light objects compared to what is in the RPC data. We just need very recent blocks so that we can do things like build hourly uptime snapshots. The rest of the stuff we calculate happens right after syncing, and we could easily discard the blocks immediately. We could even architect it so that we don't store them at all and simply query them from RPC as needed (there is a 20-block page limit, so this is kinda cumbersome; it sure would be nice to be able to configure this somehow so our local node for this application could allow an unlimited or very large number of blocks per call).

This issue seems to mostly be about validator statistics, so here's a rough list of the high-level stuff we want to know (and currently calculate ourselves):
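To make the "hourly uptime snapshot from recent blocks" idea above concrete, here is a minimal sketch. The block shape is hypothetical and simplified: real Tendermint blocks carry the commit signatures for height H-1 inside block H (`last_commit`), so an implementation would extract signer addresses from there; here each block just carries a `signers` list.

```python
# Sketch: uptime over a window of recent blocks.
# The {"height": ..., "signers": [...]} shape is an assumption for
# illustration, not the real Tendermint RPC response format.
from collections import Counter

def uptime_snapshot(blocks, validators):
    """Return the fraction of blocks each validator signed."""
    signed = Counter()
    for block in blocks:
        for addr in block["signers"]:
            signed[addr] += 1
    total = len(blocks)
    return {v: signed[v] / total for v in validators}

blocks = [
    {"height": 100, "signers": ["valA", "valB"]},
    {"height": 101, "signers": ["valA"]},
    {"height": 102, "signers": ["valA", "valB"]},
    {"height": 103, "signers": ["valA", "valB"]},
]
snap = uptime_snapshot(blocks, ["valA", "valB"])
print(snap)  # valA signed 4/4 blocks, valB signed 3/4
```

Because only the window of recent blocks is needed, older blocks can be pruned as soon as the snapshot is recorded, matching the "discard immediately" point above.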
I'm not sure how gaia-light can provide some of this in real time in a way that would be useful; it's not really appealing to hammer the API every second to try to keep up with voting-power changes.

With the block-by-block approach we just consider all the new blocks, record/do the right things, then wait a bit and do it again for new blocks. If we mess up, we just reprocess those blocks. It feels pretty elegant, and with pruning of old blocks we no longer need, it doesn't feel clunky at all.

What would be helpful for us to build more would be:
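The catch-up loop described above can be sketched roughly as follows. Everything here is a stand-in: `fetch_blocks`, the dict-based store, and the block shape are assumptions for illustration, not real gaiad/Tendermint APIs. The key property is idempotence: if processing crashes partway through, re-running simply reprocesses from the last recorded cursor.

```python
# Sketch: process new blocks since the stored cursor, then repeat later.
def catch_up(store, fetch_blocks, chain_height):
    """Idempotent catch-up: safe to re-run after a crash or missed sync."""
    cursor = store.get("cursor", 0)
    for block in fetch_blocks(cursor + 1, chain_height):
        # Record/derive whatever is needed from this block...
        store.setdefault("blocks", {})[block["height"]] = block
        # ...then advance the cursor so reprocessing resumes here.
        store["cursor"] = block["height"]
    return store

# Fake fetcher standing in for an RPC client:
def fake_fetch(start, end):
    return [{"height": h} for h in range(start, end + 1)]

store = catch_up({}, fake_fetch, 5)   # processes blocks 1..5
catch_up(store, fake_fetch, 8)        # later run picks up 6..8
```

This is also why the database stays recreatable from scratch: dropping the store and re-running from cursor 0 rebuilds the same state from the chain.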
I hope this is helpful feedback on this issue. Let me know here or ping me on Riot if I can elaborate or clarify anything.
Brain dump:
Seems pretty good to me. One thing I'm not sure I see here is info on validators being revoked/unrevoked (unjailed?). That would be useful for us, since currently, as I understand it, we don't handle it at all and would have to more or less guess that a validator was revoked from its voting power going to 0.
@rfunduk you can use the
Unfortunately no, I don't think so; that endpoint doesn't take a block height, does it? We need our system to be resilient to downtime, missed syncs, etc., so we effectively need our database to be recreatable at any time from scratch using the RPC/LCD.
I see. I believe the CLI allows for this. I guess the LCD endpoints should be more flexible in their query functionality.
Yea, we can't call the CLI because the node isn't even on the same instance (we sync various chains/networks, and our web-related servers are separate).
cc @fedekunze thoughts on adding this information to other endpoints?
@rfunduk thanks for the feedback. There's actually an open issue for that (#2202). I will sync with @jackzampolin to increase the priority of that issue once we merge #2249.
Moving what's left to #2202. Please open a separate issue if someone feels that we're missing something.
When building explorers, there is a bunch of additional data that Tendermint Core tracks that would be useful to expose to clients. Currently, if you want to track uptime for validators, you need to parse through the whole chain and store data for each block. This is how figment.network is providing that extra data: they have built a separate database that tracks it.

It would be nice if `tendermint`, `gaiad`, or `gaia-light` exposed this data to third-party clients. It would make explorers significantly easier to build as well as give them more features. Ideally we want to expose this in ICS0 (`TendermintAPI`), since this will be common across all Tendermint-based chains, whether PoS or PoA.

cc @nylira @adrianbrink @ebuchman
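"Parse through the whole chain" runs into the roughly 20-blocks-per-call page limit mentioned earlier in the thread, so a client has to page through heights. A minimal sketch, assuming a `rpc_blockchain(min_height, max_height)` stand-in for a call like Tendermint's `/blockchain` endpoint (the stub below is illustrative, not a real client):

```python
# Sketch: iterate the whole chain under a per-call page limit.
PAGE_LIMIT = 20  # per-call cap mentioned in the thread

def iter_blocks(rpc_blockchain, min_height, max_height):
    """Yield block metadata for min_height..max_height, one page at a time."""
    lo = min_height
    while lo <= max_height:
        hi = min(lo + PAGE_LIMIT - 1, max_height)
        for meta in rpc_blockchain(lo, hi):
            yield meta
        lo = hi + 1

# Stub RPC for illustration:
def stub_rpc(lo, hi):
    return [{"height": h} for h in range(lo, hi + 1)]

heights = [m["height"] for m in iter_blocks(stub_rpc, 1, 45)]
# 45 heights fetched across three pages (1-20, 21-40, 41-45)
```

A configurable or larger page size on the node, as suggested above, would collapse most of this paging into a single call.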