Distro full-data initialization consumes too much memory and triggers OutOfMemoryError, preventing startup #8072
Labels: kind/enhancement
Comments
i will solve it
liqipeng added a commit to liqipeng/nacos that referenced this issue on Apr 1, 2022:
… reduce memory cost, especially when DistroProtocol fetching full data from peer node in the initialization phase. (alibaba#8072)
This PR has been merged; should the same change also be synced to the develop branch?
@KomachiSion If it should also be merged into develop, do I need to submit a separate PR, or is there some other process?
Just submit another PR against the develop branch.
OK, I'll find time to open one.
liqipeng added a commit to liqipeng/nacos that referenced this issue on Apr 7, 2022:
… reduce memory cost, especially when DistroProtocol fetching full data from peer node in the initialization phase. (alibaba#8072)
liqipeng added a commit to liqipeng/nacos that referenced this issue on Apr 9, 2022:
…`(for v2.x) branch to `v1.x-develop`. (alibaba#8072)
liqipeng added a commit to liqipeng/nacos that referenced this issue on Apr 9, 2022:
…`(for v2.x) branch to `v1.x-develop`. (alibaba#8072)
CherishCai pushed a commit to CherishCai/nacos that referenced this issue on Apr 12, 2022:
… reduce memory cost, especially when DistroProtocol fetching full data from peer node in the initialization phase. (alibaba#8072) (alibaba#8075)
CherishCai pushed a commit to CherishCai/nacos that referenced this issue on Apr 12, 2022:
…`(for v2.x) branch to `v1.x-develop`. (alibaba#8072) (alibaba#8119)
Describe the bug
Reproduction
Environment: node heap configured with -Xms6g -Xmx6g
Nacos version: 1.4.3
Steps: while testing naming, 600,000 instances were written into the naming service. When one node was subsequently restarted, it failed to start. The logs showed the restarted node repeatedly failing to pull data from the /v1/ns/distro/datums endpoint; calling the endpoint manually returned java.lang.OutOfMemoryError: Java heap space; caused: Java heap space (a sketch of such a manual call is shown below).
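For reference, a minimal sketch of how the /v1/ns/distro/datums endpoint can be called manually. The host, the default port 8848, and the /nacos context path are assumptions, not taken from the report:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DistroDatumsProbe {
    public static void main(String[] args) throws Exception {
        // Assumed address: default Nacos port 8848 and context path /nacos.
        URI uri = URI.create("http://127.0.0.1:8848/nacos/v1/ns/distro/datums");
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // On an affected node this call returns the OutOfMemoryError message
        // quoted above instead of the serialized datums.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```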
Analysis:
The problem was traced to the serialization method.
In an experiment, 400,000 instances (with fairly large metadata) were constructed; according to a profiling tool, the 400,000 instances occupy 320 MB of memory.
mapper.writeValueAsString(obj) produces a String occupying 616 MB,
and the subsequent String.getBytes conversion to byte[] runs through:
java.lang.String#getBytes(java.nio.charset.Charset)
→ java.lang.StringCoding#encode(java.nio.charset.Charset, char[], int, int)
→ sun.nio.cs.UTF_8.Encoder#encode // produces a byte[] of length 969908091 (925 MB)
→ java.lang.StringCoding#safeTrim(byte[], int, java.nio.charset.Charset, boolean)
→ java.util.Arrays#copyOf(byte[], int) // copies once more into a byte[] of length 323302697 (308 MB)
In total, 616 MB + 925 MB + 308 MB = 1849 MB of temporary objects are created,
so toJsonBytes is quite inefficient (a sketch of this path is shown below).
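To make the path concrete, here is a minimal sketch of a writeValueAsString-then-getBytes helper of the kind analyzed above; the class and method names are hypothetical, not the actual Nacos source:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.charset.StandardCharsets;

// Hypothetical stand-in for the serialization utility analyzed above.
public class JsonBytesSketch {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Inefficient path: serialize to a String first, then encode it to UTF-8.
    // For ~400k instances this allocates the intermediate String (~616 MB),
    // the encoder's worst-case buffer (~925 MB, 3 bytes per char), and the
    // trimmed copy (~308 MB): roughly 1849 MB of temporary objects.
    public static byte[] toJsonBytesViaString(Object obj) throws JsonProcessingException {
        String json = MAPPER.writeValueAsString(obj);
        return json.getBytes(StandardCharsets.UTF_8);
    }
}
```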
Proposed improvement:
Without changing the existing mechanism, switching to Jackson's mapper.writeValueAsBytes mitigates the problem, saving more than 1 GB of temporary objects (the amount measured in the experiment; the actual saving depends on the real instance count and instance sizes). A sketch of the change is shown below.
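A minimal sketch of the suggested change, again with a hypothetical class name and a plain shared ObjectMapper:

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical stand-in showing the improved path.
public class JsonBytesImproved {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // writeValueAsBytes encodes the JSON directly into Jackson's internal
    // UTF-8 byte buffer, skipping both the intermediate String and the
    // char[] -> byte[] re-encoding, so the peak footprint is bounded by the
    // final payload rather than payload + String + encoder buffer.
    public static byte[] toJsonBytes(Object obj) throws JsonProcessingException {
        return MAPPER.writeValueAsBytes(obj);
    }
}
```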