Memory leak #3401
Comments
It seems related to #3342. FYI: fluent-plugin-systemd 1.0.5 was released recently; it fixes "Plug a memory leaks on every reload". Is the problem still reproducible with fluent-plugin-systemd 1.0.5?
I tested with the complete config and I still see the memory leak. Today I will run with only the systemd part to see if there is a change.
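As an illustration of what running "only the systemd part" can look like (the reporter's actual config is not reproduced in this thread), a minimal fluent-plugin-systemd input is sketched below; the tag, journal path, cursor-storage path, and the stdout match are placeholders, not the reporter's values.

# Hypothetical sketch only -- not the reporter's config. A systemd-only pipeline:
# read the journal with fluent-plugin-systemd and write events to stdout.
<source>
  @type systemd
  tag systemd.journal              # placeholder tag
  path /var/log/journal            # default journal location
  read_from_head true
  <storage>
    @type local
    persistent true
    path /var/lib/fluentd/journald-cursor.json   # placeholder cursor path
  </storage>
</source>

<match systemd.**>
  @type stdout                     # placeholder output for the test
</match>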
I've been investigating a similar issue for a few days, and observed that for fluentd
Ah, that is good information. Please share the results with the latest fluentd version, if you can. Is it possible to share the configuration you are using?
This is my actual config
Yes, also with fluent-plugin-systemd version 1.0.5 I see the same behaviour. I can attach a new graph, but it shows exactly the same as the first one.
Thanks to @sumo-drosiek, I can confirm that no memory leak occurs when I do not use a buffer section in the output. To be 100% sure, I am rerunning everything with a new config that has a buffer section. The config without buffers:
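The attached configs are not reproduced in this thread, but the difference being tested is whether the output <match> block carries a <buffer> section. A rough, hypothetical sketch of the two variants follows; the output type and buffer values are placeholders, not the reporter's settings.

# Hypothetical sketch, not the reporter's attachment.

# Variant without a <buffer> section (the case reported as not leaking):
<match **>
  @type stdout
</match>

# Variant with an explicit <buffer> section (the case reported as leaking);
# the buffer parameters are placeholder values:
<match **>
  @type stdout
  <buffer>
    @type memory
    flush_interval 5s
    chunk_limit_size 8MB
    total_limit_size 64MB
  </buffer>
</match>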
I've confirmed the issue. Details:
diff --git a/lib/fluent/plugin/in_systemd.rb b/lib/fluent/plugin/in_systemd.rb
index adb8b3f..66f4373 100644
--- a/lib/fluent/plugin/in_systemd.rb
+++ b/lib/fluent/plugin/in_systemd.rb
@@ -120,6 +120,7 @@ module Fluent
      init_journal if @journal.wait(0) == :invalidate
      watch do |entry|
        emit(entry)
+       GC.start
      end
    end
@ashie I just wanted to add the results of my test runs with and without buffering configured in an output: without buffering there is no memory leak, and with buffering a memory leak occurs.
So to be complete: No memory leak:
Memory leak:
Thanks, further investigation may be needed on the fluentd side.
This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove stale label or comment or this issue will be closed in 30 days
@ashie any update on this? |
I am also facing this issue. Support has suggested to setup a CRON to restart service until a resolution comes: https://stackoverflow.com/questions/69161295/gcp-monitoring-agent-increasing-memory-usage-continuously. |
This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove stale label or comment or this issue will be closed in 30 days |
The issue is still here. Please unstale it.
Can it be connected with #3634? |
Any update on this?
For me, the fix was setting
for the elasticsearch match section. The aggregator pod uses all the RAM you give it, right up to the limit, until the RAM is eventually released.
This helped make things usable, but I still have no idea why it is so greedy with RAM (the forwarders, by the way, behave normally). Of course, I also had to manually pin the elasticsearch version to 7.17. I wasted a lot of time on this, and I'll be happy if any of these hints turn out to be useful.
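The comment does not show which setting was changed. For illustration only, memory held by a buffered elasticsearch <match> section is commonly bounded with buffer limits such as the ones below; the tag pattern, host, and values are placeholders, not a recommendation and not the commenter's config.

# Hypothetical sketch -- not the commenter's config.
<match app.**>
  @type elasticsearch
  host elasticsearch.example.internal
  port 9200
  <buffer>
    @type file                       # file buffer keeps chunk payloads on disk instead of in RAM
    path /var/log/fluentd/es-buffer
    chunk_limit_size 8MB
    total_limit_size 512MB
    flush_interval 10s
    overflow_action drop_oldest_chunk
  </buffer>
</match>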
When you use jemalloc, |
I am also seeing a memory leak. Here is our Gemfile.lock and here is a 30 day graph showing the memory leak over several daemonset restarts. Each distinct color is one instance of fluentd servicing a single node: |
After upgrading fluentd from v1.7.4 to v1.14.6, memory usage spiked from 400~500Mi to about 4Gi, and the trend keeps growing slowly but seemingly without bound day by day, even though there were NO configuration or workload changes. The following graph covers several hours. Plugin list in use:
Are there any changes between the two versions that could affect memory usage?
@Mosibi |
@madanrishi do you have a link to the kubernetes issue that you think this might be related to?
Any update on this issue, guys? I am facing a similar issue.
Hi, we have a memory leak in the following situation:
Changing the log_level so that non-matching pattern messages are not logged (e.g. setting log_level to error) fixes the memory leak. Environment:
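The configuration behind this report is not shown. As a hypothetical illustration, "pattern not matched" warnings are typically emitted by the input's parser at warn level, and they can be silenced either per plugin with @log_level or globally in <system>; the source below (type, path, tag, parser) is a placeholder.

# Hypothetical sketch -- not the reporter's config.
# Per-plugin: raise only this input's log level so its warnings are not logged.
<source>
  @type tail
  path /var/log/app/*.log
  pos_file /var/log/fluentd/app.log.pos
  tag app.logs
  @log_level error
  <parse>
    @type json
  </parse>
</source>

# Or globally, for every plugin:
<system>
  log_level error
</system>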
Facing the same issue, observed with the cloudwatch_logs, log_analytics, logzio-0.0.22, and slack plugins: memory consumption of each process keeps rising from a few hundred megabytes to several gigabytes over a few days. td-agent 4.4.1, fluentd 1.13.3 (c328422).
My colleague Lester and I found an issue related to this memory problem:
Describe the bug
I noticed a memory issue, which looks to me like a leak. I had a more complex configuration, but boiled it down to the simple config below just to rule out anything specific to my setup. Even with this config, I observe the memory leakage.
I run fluentd 1.12.3 from a container on a Kubernetes cluster via Helm
To Reproduce
Deploy Fluentd 1.12.3 with Helm on a Kubernetes cluster using the provided config. Let it run for some time and observe the memory metrics.
Expected behavior
No memory leak
Your Environment
Fluentd 1.12.3 on Kubernetes 1.17.7
Your Configuration
Additional context
With my more complex configuration I observe the same memory growth, and there seems to be a correlation between the growth rate and the number of logs ingested. In other words, with more inputs (in my case, container logs), the (assumed) memory leak grows faster.