nil:NilClass error connecting to OpenSearch #42
Please try the following and report back your results: get rid of the 'type_name' entry entirely.
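For reference, a minimal sketch of a match section with the 'type_name' entry removed; the host, port, and index name here are hypothetical placeholders, not values from this thread:

```
<match app.**>
  @type opensearch
  host opensearch.example.com   # hypothetical host
  port 9200
  index_name fluentd
  # type_name fluentd           # removed per the suggestion above
</match>
```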
@ryn9: removed the 'type_name' entry.
It seems it's an issue related to the latest fluent-plugin-opensearch version; reverting to 1.0.2 fixed my issue.
@cosmo0920 - perhaps a bug in 1.0.3?
@cosmo0920 - I hit the same bug when using 1.0.3.
Sure, I'll take a look.
Could you paste the context around the error? Ruby is a scripting language.
Here is a 1.0.3 error, as compared to the same config and the same test messages being sent through 1.0.2:
@cosmo0920 @ryn9 did you also see this warning message in the 1.0.2 gem? Any idea what can cause this error?
@LeonD9 Could you paste more context around the error? The Ruby backtrace is also important for digging into the cause.
@cosmo0920 - I ran this again with tracing, and it looks like this occurs in the bulk error condition. If I send a payload with no errors, it appears to work okay. When I send a payload with an error, it fails as above. Here is a slightly modified trace:
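The bulk-error path described above can be illustrated with a minimal sketch. This is not the plugin's actual code; it only shows how a `NoMethodError` on `nil:NilClass` can surface when a bulk response item is keyed by a different action than the code expects (the key names `index`, `create`, and `status` follow the OpenSearch bulk API response shape):

```ruby
# Count failed items in a bulk response. Each item is keyed by the
# action that produced it ("index", "create", ...). Code that assumes
# one fixed key gets nil for other actions, and calling a method on
# that nil raises NoMethodError on nil:NilClass.
def count_bulk_errors(response)
  response["items"].count do |item|
    # Defensive lookup: try the common action keys, then fall back
    # to the first value rather than assuming "index".
    entry = item["index"] || item["create"] || item.values.first
    entry && entry["status"] >= 400
  end
end

ok  = { "items" => [{ "index"  => { "status" => 201 } }] }
bad = { "items" => [{ "create" => { "status" => 400 } }] }
puts count_bulk_errors(ok)   # 0
puts count_bulk_errors(bad)  # 1
```

With a non-defensive lookup such as `item["index"]["status"]`, the `bad` payload above would raise instead of being counted.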
@cosmo0920 this looks the same as this issue: fluent/fluentd#1973
@cosmo0920 is there anything more we can provide in order to get this resolved?
Ordinary records will not cause this issue. Could you provide a sample record that triggers this error?
I've encountered this issue too. Interestingly, I never observed the problem before adding filters to retag records with JSON in the
Error output from fluentd:
I can't tell what record(s) are causing the problem, but when I switch the
NOTE that, whilst I can't tell if this is the offending record, I've selected it as I've noticed the JSON in the … Whether this invalid JSON is related to this issue or not, I don't know.
Running with trace logs, I've found:
The related record for the bulk insert appears to be:
The JSON looks valid as far as I can tell, so my previous comment may be a red herring. HTH.
Thanks! Candidate records for reproducing this would be very helpful for us. I'll dig into the root cause of this issue.
@cosmo0920 I can reproduce it. I think it happens when the bulk request gets a 400 error from OpenSearch, for example because of a datatype mismatch (#53). I am getting this error after upgrading to the latest version. Before, the error was very clear when I had downgraded to
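The 400 scenario can be sketched as follows. This is a hypothetical illustration, not the plugin's code: when OpenSearch rejects the whole bulk request (e.g. HTTP 400 for a malformed body), the response can carry a top-level `error` object instead of an `items` array, so code that unconditionally reads `response["items"]` ends up calling a method on nil:

```ruby
# Handle a bulk response defensively: a rejected request may have no
# "items" array, only a top-level "error" object.
def handle_bulk_response(response)
  items = response["items"]
  if items.nil?
    # Without this guard, iterating over items would raise
    # NoMethodError on nil:NilClass.
    return "bulk request failed: #{response.dig("error", "reason")}"
  end
  "#{items.length} item(s) processed"
end

success = { "items" => [{ "index" => { "status" => 201 } }] }
failure = { "error" => { "type"   => "mapper_parsing_exception",
                         "reason" => "datatype mismatch" } }
puts handle_bulk_response(success)  # 1 item(s) processed
puts handle_bulk_response(failure)  # bulk request failed: datatype mismatch
```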
Same issue with 1.0.4; it seems to also not flush the items from the buffer, which leads to a buffer overrun and then a total system halt (i.e. no logs at all). Downgrading to 1.0.2 solved it, but frankly that's not really a solution.
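As a mitigation for the overrun-and-halt symptom (independent of the underlying bug), fluentd's buffer section can be bounded so a stuck chunk doesn't take the whole pipeline down. A sketch with assumed path and sizes; the parameters (`retry_max_times`, `overflow_action`, etc.) are standard fluentd buffer settings:

```
<buffer>
  @type file
  path /var/log/fluent/opensearch-buffer   # hypothetical path
  flush_interval 5s
  chunk_limit_size 8MB
  total_limit_size 512MB
  retry_max_times 5                  # give up on a bad chunk eventually
  overflow_action drop_oldest_chunk  # shed load instead of halting entirely
</buffer>
```

Dropping the oldest chunk trades some log loss for continued delivery; whether that trade-off is acceptable depends on the deployment.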
I'm not sure why
Steps to replicate
OpenSearch output buffer config:
I receive the following error when trying to send logs to opensearch:
What can be the reason for this error?
Expected Behavior or What you need to ask
Access to OpenSearch is open and logs arrive from time to time, but I keep receiving this error most of the time. What can be the reason for this error?
Using Fluentd and OpenSearch plugin versions
kubernetes
fluentd --version or td-agent --version:
:v1.14.3-debian-elasticsearch7-1.0
fluent-gem list, td-agent-gem list, or your Gemfile.lock: