When launching a new pod, the beats-exporter container shuts down with the following error message:
2022-10-12 09:30:48,767 - __main__ - ERROR - Error connecting Beat at port 5066:
HTTPConnectionPool(host='localhost', port=5066): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f149d86b0d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
However, after a couple of restarts (usually no more than 2) the pod comes up alive and ready. I assume the filebeat container isn't ready fast enough, so the exporter exhausts its retries.
Is there some k8s way to handle this? I can add a delay by overriding the container CMD (roughly the sketch below), but that feels like kind of a hack to me.
Maybe we could set a startup delay using an argument, or increase the interval between retries?
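For reference, the CMD-override hack I have in mind would look roughly like this; the image name and entrypoint path below are just placeholders, not the exporter's actual ones:

containers:
  - name: beats-exporter
    image: beats-exporter:latest              # placeholder image name
    command: ["/bin/sh", "-c"]
    args:
      # placeholder entrypoint; the real exporter command/arguments would go here
      - sleep 10 && exec python3 /app/main.py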
@OranShuster this is probably way too late, but anyways:
If you are running this in k8s, you could add a startupProbe to the exporter container that checks whether the filebeat HTTP server has started / is answering requests, e.g.:
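Roughly like this (untested), assuming filebeat's HTTP endpoint on port 5066 is bound to an address the kubelet can reach (not just localhost), since httpGet probes go to the pod IP:

containers:
  - name: beats-exporter
    image: beats-exporter:latest        # placeholder image name
    startupProbe:
      httpGet:
        path: /
        port: 5066                      # filebeat's http endpoint, as seen in the error message
      periodSeconds: 5                  # re-check every 5 seconds
      failureThreshold: 12              # give filebeat up to ~60s to come up before restarting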
sleep 10, the poor man's startupProbe 😅
Are you currently still using the beat exporter? If so, is it still working?
(just wondering because the last commit is 4 years old)