Replies: 1 comment 4 replies
-
Are there any crashes/exceptions in the logs? To get more details about the memory types beam is using, you could try GET-ing the `_system` endpoint.

Check your ERL_FLAGS; some of those flags could be contributing. Other flags I've seen successfully used in production on bare-metal nodes:

Erlang 22 and CouchDB 3.1.2 are quite a bit older. There have been some memory leak fixes in Erlang VMs since then, so if possible, as a test, try upgrading to the latest Erlang 24 patch version. CouchDB 3.1.x is also quite old and I think has a number of CVEs against it; see if it's possible to upgrade to 3.3.3. I am not familiar with AWS's offering, so not sure what's possible there.
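Once you have the JSON from the `_system` endpoint, the `memory` section tells you which Erlang allocator category is growing. A minimal sketch of how to read it, assuming the documented field names; the numbers below are made up for illustration, not real output from the poster's nodes:

```python
# Sample `memory` section like the one returned by the _system endpoint.
# Field names follow Erlang's memory() categories; values are illustrative.
sample_memory = {
    "total": 27_000_000_000,
    "processes": 1_200_000_000,
    "processes_used": 1_100_000_000,
    "binary": 24_000_000_000,
    "code": 12_000_000,
    "ets": 150_000_000,
    "atom": 600_000,
    "atom_used": 580_000,
    "system": 25_000_000_000,
}

def top_memory_consumers(memory, n=3):
    """Return the n largest memory categories, excluding rollup totals."""
    rollups = {"total", "system", "processes_used", "atom_used"}
    items = [(k, v) for k, v in memory.items() if k not in rollups]
    return sorted(items, key=lambda kv: kv[1], reverse=True)[:n]

for name, size in top_memory_consumers(sample_memory):
    print(f"{name}: {size / 1024**3:.2f} GiB")
```

If `binary` dominates (as in this made-up sample), the memory is being held in refc binaries rather than process heaps, which points at a different set of fixes than a process leak would.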
-
Hi all. I am working on a scaled deployment of CouchDB 3.1.2 hosted on 60x AWS m7i.2xlarge (32GB RAM, 8 core) instances, and am struggling with very high memory use on all of them. Each scaled couch only supports 100 stores with a master (15MB), 2x cluster (12MB) and a customer (13MB) db, all of which contain ~900 documents. So in total each scaled instance is hosting ~5.2GB of data, and yet on all instances disk usage is over 22GB, and RAM used (after making updates to multiple docs without rebooting) is over 27GB, sometimes reaching the 32GB limit and falling over with an OOM error (all of it consumed by beam.smp).

Could someone please explain why this is, and how I can go about reducing the memory use in a significant way? I've tried bumping the Erlang version to 22.3.4.9, as it was pinned at 20.3.8.26 previously (and I had similar issues with VerneMQ, which were largely solved with an Erlang bump), but it hasn't made any difference. I also looked through various config settings but couldn't spot anything off with our current config. If you could point out what I've missed, that would be a huge help. Thanks in advance.
ERL_FLAGS="-smp enabled +a 64 +Q 32768 +A 16 +SP 100:100"
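As a sanity check on the numbers in the post (100 stores, each with a 15MB master, 2x 12MB cluster DBs and a 13MB customer DB), the expected raw data footprint works out as:

```python
stores = 100
per_store_mb = 15 + 2 * 12 + 13  # master + 2x cluster + customer

total_gb = stores * per_store_mb / 1000
print(f"{per_store_mb} MB per store, {total_gb:.1f} GB total")
```

That confirms the ~5.2GB figure, so the 22GB on disk and 27GB of RAM are overhead on top of the raw document data; likely suspects in CouchDB are uncompacted file revisions, view indexes, and binary memory held inside the VM.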