The memory utilization is increasing in Istanbul BFT #481
Comments
@hagishun could you give me some details of the cluster: what is the configuration like and what hardware was used? Also, are you getting the same issue when building the quorum client from master?
@tharun-allu thanks for the metrics. What's the load on the chain, and how far has it advanced block-wise?
@fixanoid the graphs above are from a network of nodes with 16 GB of memory each, and the block height is 4 million. Yesterday I restarted the geth process in a different network with 7 million blocks; attached is the memory graph of the nodes (there are 4 in the network). What I noticed is that 2 of the nodes shot up in memory; one node died and another will die soon.
Hi @tharun-allu, I've been trying to replicate this issue but have been unable to. Did you see it only after the chain reached 4 million blocks? From the attached graph it looks quite stable for some time before going up - was any incident observed in the log?
My suspicion is that the more transactions go through the network, the faster the growth. I currently run 3 sets of networks, and the rate of growth seems to correlate with how busy (in terms of transactions) the network is. If you want me to provide additional data or collect any new metrics, I can do that and post it here. Also, to reduce confusion, I will only post data from one environment. Let me know your thoughts.
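For anyone who wants to collect comparable numbers, below is a minimal Go sketch of the kind of periodic heap sampling that could be run alongside a load test. It is not geth/quorum code; the function name `sampleHeap` and the log format are illustrative assumptions.

```go
// Hypothetical monitoring helper, not part of geth/quorum.
// Logs heap statistics at a fixed interval so memory growth can be
// lined up against block height / transaction throughput.
package main

import (
	"log"
	"runtime"
	"time"
)

func sampleHeap(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			var m runtime.MemStats
			runtime.ReadMemStats(&m)
			log.Printf("heapAlloc=%d MiB heapObjects=%d numGC=%d",
				m.HeapAlloc/1024/1024, m.HeapObjects, m.NumGC)
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	go sampleHeap(30*time.Second, stop)
	// ... run the workload here, then stop sampling.
	time.Sleep(2 * time.Minute)
	close(stop)
}
```

Plotting `heapAlloc` from such a log next to the transaction rate would show whether memory growth tracks load.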
@tharun-allu thank you for the info. I've put up a change for this - https://github.com/namtruong/quorum/tree/bugfix/istanbul-storeBacklog-memory-leak Many thanks!
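As a rough illustration of the general technique such a fix relies on - bounding and pruning the Istanbul message backlog - here is a minimal, self-contained Go sketch. The types `backlogStore` and `msg` are hypothetical and are not the actual quorum/istanbul implementation; see the linked branch and PR for the real change.

```go
// Sketch only: cap the number of queued consensus messages per sender and
// drop messages for sequences that have already been committed, so the
// backlog cannot grow without bound.
package main

import "fmt"

type msg struct {
	Sequence uint64
	Payload  []byte
}

type backlogStore struct {
	maxPerSender int
	queues       map[string][]msg // keyed by sender address (hex string for simplicity)
}

func newBacklogStore(maxPerSender int) *backlogStore {
	return &backlogStore{maxPerSender: maxPerSender, queues: make(map[string][]msg)}
}

// store queues a future message, evicting the oldest one for that sender
// once the per-sender cap is reached.
func (b *backlogStore) store(sender string, m msg) {
	q := b.queues[sender]
	if len(q) >= b.maxPerSender {
		q = q[1:] // drop the oldest queued message
	}
	b.queues[sender] = append(q, m)
}

// prune discards every queued message at or below the committed sequence;
// those can never be replayed, so keeping them only leaks memory.
func (b *backlogStore) prune(committedSeq uint64) {
	for sender, q := range b.queues {
		kept := q[:0]
		for _, m := range q {
			if m.Sequence > committedSeq {
				kept = append(kept, m)
			}
		}
		if len(kept) == 0 {
			delete(b.queues, sender)
		} else {
			b.queues[sender] = kept
		}
	}
}

func main() {
	b := newBacklogStore(2)
	b.store("0xabc", msg{Sequence: 5})
	b.store("0xabc", msg{Sequence: 6})
	b.store("0xabc", msg{Sequence: 7}) // evicts sequence 5
	b.prune(6)                         // drops sequence 6, keeps 7
	fmt.Println(len(b.queues["0xabc"])) // 1
}
```

The key idea is that queued future messages are capped per sender and anything at or below the committed sequence is discarded, so a busy network cannot make the backlog grow indefinitely.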
@namtruong I updated my dev network with the code from the branch and will keep you posted on how it goes today.
@tharun-allu have you got any update?
Looks like this has resolved the issue. I will confirm by downloading 2.1.0 from upstream and seeing whether the issue comes back. I upgraded from 2.0.1 to 2.1.0 using @namtruong's branch.
@tharun-allu thanks for your update. FYI, my colleague and I were also working on a different patch here: https://github.com/jpmorganchase/quorum/compare/master...trung:f-istanbul-backlogs?expand=1 . We're in the process of testing the changes, but they ultimately solve the same issue. Please feel free to test either of the solutions and let us know your feedback.
@tharun-allu the new pull request that addresses the issue is here: #521
System information
ENV1
Version: 1.7.2-stable
Git Commit: 4a77480
Quorum Version: 2.0.2
Architecture: amd64
Network Id: 1
Go Version: go1.7.3
Operating System: linux
GOPATH=
GOROOT=/usr/local/go
ENV2
Version: 1.7.2-stable
Git Commit: fd0e3b9
Quorum Version: 2.0.2
Architecture: amd64
Network Id: 1
Go Version: go1.7.3
Operating System: linux
GOPATH=
GOROOT=/usr/local/go
Expected behaviour
Memory is periodically released, so utilization stays bounded.
Actual behaviour
Memory utilization keeps increasing; memory is periodically released on only one node.
Steps to reproduce the behaviour
Evnr
Backtrace
log.zip