JavaScript heap out of memory during import #259
Just doing some research, and it would seem there is a Node environment variable which can be set.
See here for details: https://www.npmjs.com/package/increase-memory-limit Do you think this is worth a try?
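For reference, the mechanism that package wraps is V8's `--max-old-space-size` flag, which can also be passed through the `NODE_OPTIONS` environment variable; a minimal sketch (the entry point shown is hypothetical):

```sh
# Raise the V8 old-space heap limit to 4 GB for the Node.js process
export NODE_OPTIONS="--max-old-space-size=4096"
node dist/main.js  # hypothetical BE entry point; adjust to the actual start command
```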
Hi @kierangirvan! Was there any other log message after `Starting KPI file streaming and saving to db`? There must be a memory leak somewhere, I suppose, although the overall design is to process the file in chunks. So I would like to know whether it failed during parsing/saving the data into the DB or during processing. Also, did you consider streaming the data into the app while your test is running? That would reduce the amount of time spent on parsing the data significantly.
Thanks for your quick response. It does suggest it is attempting to save to the DB; this step usually takes ages, but we've come to live with that. The last log entry before it runs out of memory is:
We are using Taurus entirely for our test design, i.e. we do not dip into JMX; it's entirely YAML-based. I believe it is not possible to enable the backend listener with Taurus via YAML; you have to convert the whole scenario to JMX to achieve this, which we really don't want to do.
If it is possible to anonymize your .jtl file and share it with me, I could have a look and check whether I can spot the issue.
One more note to point 2): actually, it seems to be possible if you do a custom JMeter installation and copy the plugin into the plugins folder? https://gettaurus.org/docs/JMeter/#JMeter-Location-Auto-Installation
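A hedged sketch of what that might look like in the Taurus YAML; `modules.jmeter.path` is a real Taurus setting, but the install path and plugin layout here are assumptions:

```yaml
# Point Taurus at a custom JMeter install whose lib/ext already
# contains the backend listener plugin jar (paths are hypothetical)
modules:
  jmeter:
    path: /opt/custom-jmeter/bin/jmeter
```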
Thanks @ludeknovy. I think the issue boils down to our ability to call the backend listener within the YAML itself. We have purposely built everything in YAML (and not JMX), and I do not believe there is a way to call the backend listener within the Taurus YAML configuration. Regarding the out-of-memory issue itself: we have included the following Node heap configuration and have now successfully uploaded a 1.1GB KPI file. We will run a few more uploads to be certain, but that seems to have done the trick.
I will close this issue once we have successfully uploaded a few more tests in the next few days.
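The exact configuration isn't preserved in the thread; assuming a Docker Compose style deployment, it would typically be an environment variable on the BE container, along these lines (the service name, image name, and tag are assumptions):

```yaml
# docker-compose sketch; image name and tag are hypothetical
services:
  be:
    image: novyl/jtl-reporter-be:latest
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096
```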
Hi @milan-panik!
It happens only during peak hours, i.e. when a batch of tests ends at the same time and a lot of reports are being uploaded.
@milan-panik
@milan-panik Thanks! I see in your config an increased value for
Have you enabled the option to delete samples after a report is generated?
I've removed the vacuum query that ran after the samples purge; it was way too heavy an operation. By default, vacuuming is handled by autovacuum anyway. So if you had
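For context (the exact statement isn't shown in the thread), the removed step presumably had roughly this shape: an explicit VACUUM issued right after the purge, forcing an expensive pass over the table, whereas PostgreSQL's autovacuum does the same cleanup incrementally in the background:

```sql
-- Presumed shape of the removed cleanup; table and column names are assumptions
DELETE FROM samples WHERE item_id = $1;  -- purge samples for a report
VACUUM samples;                          -- heavy explicit pass, now left to autovacuum
```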
Thank you, Ludek. I'm a bit lost though: has it already been released? I've checked the releases and the related changelog and cannot find it.
@milan-panik it has not been released yet, but it's available in
@kierangirvan I've pushed a possible fix, but I would appreciate it if you could test it and let me know.
Thanks @ludeknovy, I'll get the latest build pushed out in the coming week and confirm whether this has helped.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I've found a memory leak and prepared a fix for it that will release the memory. But I need to change the whole solution so that the high memory usage would not occur in the first place. However, that won't be possible without changing the DB Docker image, as it needs to include the TimescaleDB Toolkit. It will take some time to prepare the image, as the HA version does not support ARM.
I've prepared new Docker images for the project: https://hub.docker.com/r/novyl/jtl-reporter-db
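Based on that Docker Hub URL, pulling the updated database image would look like the following (the tag is an assumption; check the repository for the current one):

```sh
docker pull novyl/jtl-reporter-db:latest
```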
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Describe the bug
Whilst attempting to upload a large .jtl file (1.1GB), the upload itself seems to work, but when the file is being processed (yellow icon in the test report view), it never completes and an exception is thrown in the BE suggesting we've run out of memory.
To Reproduce
Attempt to upload 1.1GB jtl file.
Expected behavior
The test results should eventually become visible in the JtlReporter FE.
Screenshots
We are running this in AWS ECS on a Fargate task; you can see that from 17:12 onwards the KPI file is being processed:
Almost 90 minutes later, the following exception is thrown by the BE container:
The container is then marked unhealthy and is replaced by a new container. From what I can see, we are not running hot on either CPU or memory on the task itself:
So I assume we need to give the runtime in question (the Node.js heap, presumably) a bigger slice of the memory. Do you know what to set, or how?