Send exception occurred: data error #1847

Closed
felixzh2020 opened this issue Feb 6, 2018 · 2 comments
@felixzh2020

Check the CONTRIBUTING guideline first; here is the list to help us investigate the problem.

  • fluentd or td-agent version.
    fluentd-0.14.20
  • Environment information, e.g. OS.
    CentOS in a Docker machine
  • Your configuration

        <source>
          @type http
          port 8901
          bind 0.0.0.0
          body_size_limit 32m
          keepalive_timeout 10s
          add_remote_addr true
        </source>

        <match ...>
          @type kafka_buffered
          brokers 192.168.1.1:9092
          buffer_type file
          buffer_path /var/log/fluent/buffer-01/*.buffer

          flush_interval 1s
          buffer_chunk_limit 4m
          buffer_queue_limit 43200

          retry_wait 1s
          max_retry_wait 60s
          disable_retry_limit true

          # ruby-kafka producer options
          max_send_retries 1
          required_acks 1
          compression_codec gzip
          output_data_type json
          output_include_time true
          num_threads 1
          #kafka_agg_max_bytes 1048576

          # loglevel: trace, debug, info, warn, error
          log_level debug
        </match>
  • Your problem explanation. If you have error logs, write them together.
    2018-02-06 09:44:21 +0000 [warn]: #0 Send exception occurred: data error
    2018-02-06 09:44:21 +0000 [warn]: #0 Exception Backtrace : /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/gzip_codec.rb:17:in `close'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/gzip_codec.rb:17:in `compress'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/compressor.rb:50:in `block in compress_data'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/instrumenter.rb:21:in `instrument'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/compressor.rb:49:in `compress_data'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/compressor.rb:34:in `compress'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/produce_operation.rb:89:in `block (2 levels) in send_buffered_messages'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/message_buffer.rb:44:in `block (2 levels) in each'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/message_buffer.rb:43:in `each'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/message_buffer.rb:43:in `block in each'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/message_buffer.rb:42:in `each'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/message_buffer.rb:42:in `each'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/produce_operation.rb:87:in `block in send_buffered_messages'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/produce_operation.rb:81:in `each'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/produce_operation.rb:81:in `send_buffered_messages'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/produce_operation.rb:47:in `block in execute'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/instrumenter.rb:21:in `instrument'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/produce_operation.rb:41:in `execute'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/producer.rb:301:in `block in deliver_messages_with_retries'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/producer.rb:293:in `loop'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/producer.rb:293:in `deliver_messages_with_retries'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/producer.rb:243:in `block in deliver_messages'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/instrumenter.rb:21:in `instrument'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/ruby-kafka-0.4.0/lib/kafka/producer.rb:236:in `deliver_messages'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/fluent-plugin-kafka-0.6.0/lib/fluent/plugin/out_kafka_buffered.rb:246:in `deliver_messages'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/fluent-plugin-kafka-0.6.0/lib/fluent/plugin/out_kafka_buffered.rb:309:in `write'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/fluentd-0.14.20/lib/fluent/compat/output.rb:131:in `write'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/fluentd-0.14.20/lib/fluent/plugin/output.rb:1043:in `try_flush'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/fluentd-0.14.20/lib/fluent/plugin/output.rb:1268:in `flush_thread_run'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/fluentd-0.14.20/lib/fluent/plugin/output.rb:420:in `block (2 levels) in start'
    /usr/local/rvm/gems/ruby-2.4.0@fluentd/gems/fluentd-0.14.20/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
    2018-02-06 09:44:21 +0000 [info]: #0 initialized kafka producer: kafka
    2018-02-06 09:44:21 +0000 [debug]: #0 taking back chunk for errors. chunk="5648805a66fd415d7ee306278c3d0f39"
    2018-02-06 09:44:21 +0000 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2018-02-06 09:44:22 +0000 chunk="5648805a66fd415d7ee306278c3d0f39" error_class=Zlib::DataError error="data error"
    2018-02-06 09:44:21 +0000 [warn]: #0 suppressed same stacktrace
    2018-02-06 09:44:22 +0000 [warn]: #0 retry succeeded. chunk_id="5648805a66fd415d7ee306278c3d0f39"
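
The top of the backtrace points into ruby-kafka's gzip codec, which runs the encoded message set through Zlib::GzipWriter; the Zlib::DataError surfaces while that writer is finalized. A minimal sketch of that compression step, assuming ruby-kafka 0.4.x behavior (the method below is illustrative, not the gem's exact source):

```ruby
require "zlib"
require "stringio"

# Sketch of the compression path the backtrace points into
# (kafka/gzip_codec.rb). ruby-kafka gzips the encoded message set;
# in the log above, Zlib::DataError is raised from the close call,
# i.e. while zlib finishes the gzip stream.
def gzip_compress(data)
  buffer = StringIO.new
  writer = Zlib::GzipWriter.new(buffer)
  writer.write(data)
  writer.close    # the failing frame in the trace is a close at gzip_codec.rb:17
  buffer.string   # StringIO keeps its contents readable after close
end

puts gzip_compress("probe").bytesize
```

Note that the retry of the same chunk succeeded one second later, which suggests the buffered chunk itself was readable and the failure was transient inside the compress-and-send path.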
@repeatedly
Member

repeatedly commented Feb 6, 2018

This seems to be a kafka plugin issue, and the error happens inside ruby-kafka.
So the problem is in the communication between ruby-kafka and your kafka cluster.
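
If the problem is between ruby-kafka and the cluster, one way to narrow it down is to drive ruby-kafka directly with the same producer settings, outside fluentd. A minimal sketch, assuming ruby-kafka 0.4.x, the broker from the config above, and a placeholder topic name ("test" is hypothetical):

```ruby
require "kafka"

# Produce one gzip-compressed batch straight through ruby-kafka, bypassing
# fluentd, to see whether the data error reproduces against the cluster.
kafka = Kafka.new(seed_brokers: ["192.168.1.1:9092"])

producer = kafka.producer(
  compression_codec: :gzip, # same codec as the fluentd config
  required_acks:     1,
  max_retries:       1
)

producer.produce("probe message", topic: "test") # "test" is a placeholder
producer.deliver_messages                        # compression happens during delivery
producer.shutdown
```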

@felixzh2020
Author

ok, thanks.
