Milliseconds are stripped from logs #15

Closed
chris-codaio opened this issue Feb 26, 2016 · 8 comments

Comments

@chris-codaio

It appears this plugin strips milliseconds from log entries, making logs appear out of order in searches (e.g., via the Kibana UI).

@jimmidyson
Contributor

What version of fluentd are you using? Sub-second precision is only supported in fluentd v0.14.0.pre.1 or in builds from the fluentd master branch.
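
For illustration (not from this thread): fluentd before v0.14 carries event time as whole seconds, so anything finer is dropped at ingestion and events within the same second can no longer be ordered by time alone. A small plain-Ruby sketch of the effect:

require 'time'

# Two log lines emitted 300 ms apart.
t1 = Time.parse('2016-02-29T19:09:09.227Z')
t2 = Time.parse('2016-02-29T19:09:09.527Z')

# fluentd < v0.14 stores event time as integer seconds, so both events
# collapse onto the same timestamp and their relative order can no
# longer be recovered from the time field.
puts t1.to_i == t2.to_i              # => true
puts Time.at(t1.to_i).utc.iso8601    # => 2016-02-29T19:09:09Z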

@chris-codaio
Author

It looks like td-agent --version returns 0.12.19. However, it does appear to be successfully parsing sub-second timestamps for logs coming from our Kibana frontend...

{"@timestamp":"2016-02-29T19:09:09.227Z","level":"info","message":"POST /_msearch?timeout=0&ignore_unavailable=true&preference=1456772940585 200 - 7261ms","node_env":"production","request":{"method":"POST","url":"/elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1456772940585","headers":{"host":"[redacted]","accept":"application/json, text/plain, */*","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.8","content-type":"application/json;charset=UTF-8","cookie":"_ga=GA1.2.1897976878.1445274807","origin":"[redacted]","referer":"[redacted]","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36","x-forwarded-for":"209.210.153.235","x-forwarded-port":"80","x-forwarded-proto":"http","content-length":"662","connection":"keep-alive"},"remoteAddress":"10.0.55.0","remotePort":33084},"response":{"statusCode":200,"responseTime":7261,"contentLength":331899},"log":"{\"@timestamp\":\"2016-02-29T19:09:09.227Z\",\"level\":\"info\",\"message\":\"POST /_msearch?timeout=0&ignore_unavailable=true&preference=1456772940585 200 - 7261ms\",\"node_env\":\"production\",\"request\":{\"method\":\"POST\",\"url\":\"/elasticsearch/_msearch?timeout=0&ignore_unavailable=true&preference=1456772940585\",\"headers\":{\"host\":\"[redacted]\",\"accept\":\"application/json, text/plain, */*\",\"accept-encoding\":\"gzip, deflate\",\"accept-language\":\"en-US,en;q=0.8\",\"content-type\":\"application/json;charset=UTF-8\",\"cookie\":\"_ga=GA1.2.1897976878.1445274807\",\"origin\":\"[redacted]\",\"referer\":\"[redacted]\",\"user-agent\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36\",\"x-forwarded-for\":\"[redacted]\",\"x-forwarded-port\":\"80\",\"x-forwarded-proto\":\"http\",\"content-length\":\"662\",\"connection\":\"keep-alive\"},\"remoteAddress\":\"10.0.55.0\",\"remotePort\":33084},\"response\":{\"statusCode\":200,\"responseTime\":7261,\"contentLength\":331899}}\n","stream":"stdout","docker":{"container_id":"cb1bcdfed10b8aa82dcfb72c102f3c132b36a52f8e890dfd7b6af1160d995a38"},"kubernetes":{"namespace_name":"kube-system","pod_name":"kibana-logging-v1-e44vs","container_name":"kibana-logging"},"tag":"kubernetes.var.log.containers.kibana-logging-v1-e44vs_kube-system_kibana-logging-cb1bcdfed10b8aa82dcfb72c102f3c132b36a52f8e890dfd7b6af1160d995a38.log"}

@jimmidyson
Contributor

I've tried to think of how this plugin could be stripping anything from logs but can't... Can you check the source document in Elasticsearch (I assume you're using that) to see if the time is correct in a nested/different field?

FYI, the lack of support for sub-second precision is discussed in fluent/fluentd#461 and fluent/fluentd#653. Upgrading our environment to fluentd v0.14.0.pre.1 immediately enabled sub-second precision in our logs.
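
One way to run the check suggested above, sketched with Ruby's standard library; the host, port, and logstash-* index pattern are assumptions about the cluster, not details from this thread:

require 'net/http'
require 'json'
require 'uri'

# Pull a few recent documents and print the stored @timestamp so we can
# see whether the milliseconds survived indexing. Host, port, and index
# pattern are assumptions; adjust them for your cluster.
uri = URI('http://localhost:9200/logstash-*/_search?size=5&sort=@timestamp:desc')
response = Net::HTTP.get_response(uri)
hits = JSON.parse(response.body).fetch('hits', {}).fetch('hits', [])

hits.each do |hit|
  puts hit['_source']['@timestamp']
end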

@chris-codaio
Author

I tried moving to 0.14.0.pre.1 as part of our migration to Kubernetes 1.2, but fluentd is stuck in a restart loop with the following error... perhaps an output format change is breaking the plugin?

2016-03-18 20:25:54 +0000 [error]: fluent/engine.rb:186:rescue in run: unexpected error error_class=NoMethodError error=#<NoMethodError: undefined method `status' for #<Hash:0x005572671ce510>>
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/gems/2.3.0/gems/fluent-plugin-kubernetes_metadata_filter-0.17.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:238:in `block in start_watch'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/gems/2.3.0/gems/kubeclient-0.4.0/lib/kubeclient/watch_stream.rb:27:in `block (2 levels) in each'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/protocol.rb:411:in `call_block'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/protocol.rb:402:in `<<'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/protocol.rb:108:in `read'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http/response.rb:319:in `read_chunked'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http/response.rb:283:in `block in read_body_0'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http/response.rb:276:in `inflater'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http/response.rb:281:in `read_body_0'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http/response.rb:202:in `read_body'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/gems/2.3.0/gems/kubeclient-0.4.0/lib/kubeclient/watch_stream.rb:24:in `block in each'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http.rb:1446:in `block in transport_request'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http/response.rb:163:in `reading_body'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http.rb:1445:in `transport_request'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/2.3.0/net/http.rb:1407:in `request'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/gems/2.3.0/gems/kubeclient-0.4.0/lib/kubeclient/watch_stream.rb:20:in `each'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/gems/2.3.0/gems/fluent-plugin-kubernetes_metadata_filter-0.17.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:235:in `start_watch'
  2016-03-18 20:25:54 +0000 [error]: fluent/supervisor.rb:165:block in start: /usr/local/lib/ruby/gems/2.3.0/gems/fluent-plugin-kubernetes_metadata_filter-0.17.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:140:in `block in configure'
2016-03-18 20:25:54 +0000 [info]: fluent/engine.rb:189:run: shutting down fluentd
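
Not from the thread, and not the plugin's actual code: a minimal reproduction of the error class in the trace. It looks as though the watch stream yields a plain Hash somewhere that filter_kubernetes_metadata.rb expects an object responding to status; the tolerant accessor at the end is purely hypothetical.

# Illustration only -- not the plugin's code.
notice = { 'type' => 'MODIFIED', 'object' => {} }

begin
  notice.status
rescue NoMethodError => e
  puts e.message   # e.g. undefined method `status' for #<Hash:...>, as in the trace
end

# A hypothetical tolerant accessor that copes with both shapes:
status = notice.respond_to?(:status) ? notice.status : notice['status']
puts status.inspect   # => nil, since this sample Hash has no 'status' key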

@chris-codaio
Author

By hacking the output plugin, I was able to get this working end-to-end, which proves it is not this filter causing the problem. :)
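
The thread never records what the hack was, so the following is only an illustration of one plausible shape for it (not the actual patch): build the @timestamp string the output writes with millisecond precision instead of whole seconds.

require 'time'

# Illustration only, not the actual patch from this thread.
record_time = Time.parse('2016-02-29T19:09:09.227Z')

# On fluentd v0.12 an output plugin only sees integer seconds, so a
# timestamp built from that value has no milliseconds:
puts Time.at(record_time.to_i).utc.strftime('%Y-%m-%dT%H:%M:%SZ')
# => 2016-02-29T19:09:09Z

# Formatting from the record's own time with %3N (three fractional
# digits) keeps the milliseconds:
puts record_time.utc.strftime('%Y-%m-%dT%H:%M:%S.%3NZ')
# => 2016-02-29T19:09:09.227Z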

@dsapala

dsapala commented Apr 18, 2017

@chrisleck how did you hack the output plugin? I'm having the same trouble and I don't quite understand what you did to make it work.

@chris-codaio
Author

@dsapala

dsapala commented Apr 19, 2017

Thanks, @chrisleck!
