
perf(node/store): fine-tune Badgerdb params #465

Merged 2 commits into main from hlib/badger-config on Feb 22, 2022
Conversation

@Wondertan (Member) commented on Feb 21, 2022

Motivations for each configuration field are described in the comments. This needs to be reviewed ASAP for the release.

@Wondertan Wondertan self-assigned this Feb 21, 2022
@liamsi (Member) left a comment


I did not verify each option individually, but the comments help a lot (thanks!).

In general I'm in favour of optimizing everything later, ideally backed by empirical data. That said, this looks like a no-brainer to me.

@Wondertan (Member, Author)

@liamsi, I forgot to mention that I had to dive deeper into this to understand what caused the OOMs during sync: they were triggered by Badger compactions. This tuning fixed the issue and substantially reduced memory requirements while also improving performance.

My droplet no longer needs more than 3 GB of RAM during compactions, and it consumes ~1.8 GB while syncing a Bridge node with a Validator running.
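The kind of tuning described above can be sketched as follows. This is a hypothetical illustration assuming the Badger v2 API; the option choices and values here are assumptions for the sake of the example, not the exact parameters merged in this PR (see the PR diff for those).

```go
package main

import (
	"github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

// newStoreOptions sketches memory-oriented Badger tuning. All values are
// illustrative assumptions, not the parameters from this PR.
func newStoreOptions(path string) badger.Options {
	return badger.DefaultOptions(path).
		// Memory-map LSM tables instead of loading them into RAM, so
		// compactions don't pull entire tables onto the Go heap.
		WithTableLoadingMode(options.MemoryMap).
		// Smaller tables make individual compactions smaller and cheaper.
		WithMaxTableSize(16 << 20). // 16 MiB
		// Fewer in-memory tables bounds the memtable footprint.
		WithNumMemtables(2).
		// Skipping compression trades disk space for less CPU and
		// decompression buffer memory during compactions.
		WithCompression(options.None)
}
```

The general pattern is to bound how much data a compaction can hold in memory at once, which is what drives peak RSS during sync.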

@Wondertan (Member, Author) commented on Feb 21, 2022

For the light node, we can go even further and get it running below 100 MB, or even just 50 MB, by acting on several more insights from profiling.

@Wondertan Wondertan merged commit 9ce4c3b into main Feb 22, 2022
@Wondertan Wondertan deleted the hlib/badger-config branch February 22, 2022 11:47
3 participants