Hi there,
System information
Geth version: geth version v1.0.7
OS & Version: Linux bsc2-rpc 5.4.106-1-pve #1 SMP PVE 5.4.106-1 (Fri, 19 Mar 2021 11:08:47 +0100) x86_64 GNU/Linux
Commit hash: none
Specs: numerous powerful machines (e.g. 40 threads, 128 GB RAM, all the bells and whistles); also reproduced on third-party nodes from QuickNode.
Expected behaviour
After subscribing to contract events with web3's contract.events.EVTNAME, I expect contract log events to fire for every matching log from fromBlock up to the current block.
Once the current block is reached, I expect further log events to fire as they happen.
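For reference, a minimal sketch of the kind of subscription described above (web3.js 1.x; the WS endpoint, ABI, contract address, event name and fromBlock are placeholders, not the exact script used):

```js
// Minimal subscription sketch (web3.js 1.x). All concrete values below are
// placeholders -- any contract event shows the same pattern.
const Web3 = require('web3');

const web3 = new Web3('ws://127.0.0.1:8546');

// ERC-20 Transfer event ABI, used here only as an example event.
const TRANSFER_EVENT_ABI = [{
  anonymous: false,
  name: 'Transfer',
  type: 'event',
  inputs: [
    { indexed: true,  name: 'from',  type: 'address' },
    { indexed: true,  name: 'to',    type: 'address' },
    { indexed: false, name: 'value', type: 'uint256' },
  ],
}];

const CONTRACT_ADDRESS = '0x0000000000000000000000000000000000000000'; // placeholder
const contract = new web3.eth.Contract(TRANSFER_EVENT_ABI, CONTRACT_ADDRESS);

// Expected: historical logs from `fromBlock` up to the head, then live logs.
contract.events.Transfer({ fromBlock: 8000000 })
  .on('data', (evt) => console.log('event in block', evt.blockNumber))
  .on('error', (err) => console.error('subscription error', err));
```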
Actual behaviour
As soon as all historical logs have been delivered, no further log events arrive, and the node starts lagging blocks badly, never able to catch up.
Steps to reproduce the behaviour
Subscribe with web3's contract.events.EVTNAME to a contract of your choice and listen for the event, starting from a block of your choice (it does not need to be far in the past).
Let a timer/interval run in parallel that frequently checks the current block number via web3.eth.getBlockNumber() (see the sketch after these steps).
Once all historical events have fired, note how the block numbers reported by the timer stall and never catch up.
Disconnecting and reconnecting to the WS service resets everything to the current block height, and the issue above starts over.
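A repro sketch of these steps, reusing the web3 instance and contract from the sketch above (the starting block and polling interval are placeholders):

```js
// Repro sketch: the subscription from the sketch above, plus an interval that
// polls the current block number over the same WS connection.
contract.events.Transfer({ fromBlock: 8000000 }) // placeholder starting block
  .on('data', (evt) => console.log('event in block', evt.blockNumber))
  .on('error', console.error);

// While historical logs are being replayed this keeps advancing; once they are
// done, the reported head stalls and falls further behind the chain tip.
setInterval(async () => {
  const head = await web3.eth.getBlockNumber();
  console.log(new Date().toISOString(), 'node head block:', head);
}, 5000);
```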
Conclusion
It may be that you need to set up several subscriptions to reproduce this properly.
My script opens roughly 80 active subscriptions, while the node is far from being overloaded.
Chances are this issue occurs with fewer subscriptions as well.
Until this is fixed, the only workaround is to disable WebSockets so the node does not run into block lag (a rough polling sketch follows below).
As a side note: a couple of weeks ago this already happened and then went away; I assume the issue is related to overall network load.
Starting from block ~5.5 million, event log queries appear to run with O(n^2) complexity, which hints at a general problem in the node implementation (probably database-related).
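To illustrate what dropping the WS subscriptions can look like in practice: a hedged sketch that polls for logs over the HTTP endpoint in bounded block ranges instead (endpoint, range size, polling interval and the reused TRANSFER_EVENT_ABI/CONTRACT_ADDRESS from the first sketch are assumptions, not a maintainer recommendation):

```js
// Hedged workaround sketch: poll eth_getLogs via getPastEvents over HTTP in
// bounded block ranges instead of holding WS subscriptions open.
const Web3 = require('web3');

const web3Http = new Web3('http://127.0.0.1:8545'); // placeholder HTTP endpoint
const contract = new web3Http.eth.Contract(TRANSFER_EVENT_ABI, CONTRACT_ADDRESS);

let lastProcessed = 8000000; // placeholder starting block

async function pollOnce() {
  const head = await web3Http.eth.getBlockNumber();
  if (head <= lastProcessed) return;
  // Keep ranges small so a single eth_getLogs call stays cheap for the node.
  const toBlock = Math.min(head, lastProcessed + 5000);
  const events = await contract.getPastEvents('Transfer', {
    fromBlock: lastProcessed + 1,
    toBlock,
  });
  events.forEach((evt) => console.log('event in block', evt.blockNumber));
  lastProcessed = toBlock;
}

setInterval(() => pollOnce().catch(console.error), 3000);
```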
Backtrace
No backtrace available, as no exceptions were thrown.
We have responded to the question and will proceed to close the case, as we did not receive any additional questions after 3 days. Please join our Discord channel for further discussion: https://discord.com/invite/YPDsUqcwR8