One TiKV instance keeps OOM when using "lightning tidb backend" to import 2T data #22964
Comments
/severity critical
What's your lightning version?
v5.0.0-rc.
It can be reproduced with the following steps:
With jeprof we get an allocation map: a.svg.zip. We can see that besides the 4 GB block cache, about 5 GB of memory is allocated in … In this case there are about 40k entries per TiKV instance, so …
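For scale, a quick back-of-envelope sketch (the 5 GB and 40k figures are taken from the profile above; the per-entry size is only the implied average, not a measured value):

```rust
fn main() {
    // Figures quoted above: ~5 GB allocated outside the block cache,
    // ~40k raft entries buffered per TiKV instance.
    let allocated_bytes = 5.0_f64 * 1024.0 * 1024.0 * 1024.0;
    let entries = 40_000.0_f64;
    // Implied average size of a buffered entry: roughly 130 KiB each.
    println!("avg entry size ≈ {:.0} KiB", allocated_bytes / entries / 1024.0);
}
```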
It's not true that … The related code is:

```rust
let peers = fetch_fsm(); // 256 at max
for fsm in peers {
    fsm.handle_normal(); // will generate a ready for every fsm
}
peers.end(); // will handle all ready, send committed entries into apply workers
```

So if there are about …
I suggest merging tikv/raft-rs#356 and introducing this feature in 5.0 to avoid the OOM.
tikv/raft-rs#356 may not solve the problem completely, because it only limits the number of committed entries within each …
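For reference, a minimal sketch of what such a per-Ready limit looks like on the raft-rs side; the field name max_committed_size_per_ready and the 16 MiB value are my assumptions for illustration, not settings taken from this issue:

```rust
use raft::Config;

fn main() {
    // Sketch only: cap how many bytes of committed entries a single Ready may
    // carry, so one raftstore poll cannot hand an unbounded amount of data to
    // the apply workers at once. The concrete value here is arbitrary.
    let cfg = Config {
        id: 1,
        max_committed_size_per_ready: 16 * 1024 * 1024, // 16 MiB, illustrative
        ..Default::default()
    };
    println!("limit = {} bytes", cfg.max_committed_size_per_ready);
}
```

Even with such a cap, as noted above, many regions handled in one batch can still add up, which is why the per-Ready limit alone may not fully solve the problem.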
Fixed by tikv/tikv#10334.
Please check whether the issue should be labeled with 'affects-x.y' or 'fixes-x.y.z', and then remove the 'needs-more-info' label.
Bug Report
Please answer these questions before submitting your issue. Thanks!
1. Minimal reproduce step (Required)
2. What did you expect to see? (Required)
No OOM.
3. What did you see instead (Required)
One TiKV instance OOMs again and again.
4. What is your TiDB version? (Required)
lightning: v5.0.0-rc
tikv:
tidb: