[kafka_log] Update integration to input type to provide multi-signal support #18266
Conversation
Vale Linting Results. Summary: 3 suggestions found. 💡 Suggestions (3)
The Vale linter checks documentation changes against the Elastic Docs style guide. To use Vale locally or report issues, refer to the Elastic style guide for Vale.
Could you also add the recommended pipeline tests as part of the PR? ++ @stefans-elastic for reviewing the PR.
🚀 Benchmarks report
/test
⏳ Build in-progress, with failures
I am not 100% clear on the test failures for the kafka_log integration test (https://buildkite.com/elastic/integrations/builds/41142#019d7164-02ba-44b9-96d7-9e5ed4848465), as a local test succeeds.
/test Check integrations kafka_log
/test stack 9.3.2
@stefans-elastic, any pointers on the integration test error or other changes that are required to progress this?
Unfortunately I wasn't able to test it locally (because …).

Root cause of the kafka_log.metrics CI test failure: in CI, all data stream tests in a package share the same Kafka broker, while locally each test gets its own fresh broker. The generic test runs first and consumes all messages from the shared topic.

Fix: at minimum, change group_id: system_test → group_id: system_test_metrics in data_stream/metrics/_dev/test/system/test-kafka-config.yml. Ideally also add a dedicated kafka-metrics …
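The minimum fix described above, as a config fragment (only the changed key is shown; the surrounding keys of test-kafka-config.yml are assumed):

```yaml
# data_stream/metrics/_dev/test/system/test-kafka-config.yml (fragment)
# Use a consumer group unique to the metrics data stream so the generic
# test's consumer (group system_test) cannot drain these messages first.
group_id: system_test_metrics
```

With a distinct group_id, each test's consumer keeps its own offset on the shared broker, so one test consuming a topic no longer starves the other.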
@stefans-elastic, that was already done, but the results were the same: c787fbb. Both the local logs and the CI test logs suggest new containers were created.
I will nevertheless update the consumer group and see if the CI improves.
@stefans-elastic, any insight? The test container is now at the data stream level, the test data is now tailored, and there is a different topic and consumer group. I've reduced the …
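For reference, a data-stream-level test broker is declared with a deploy definition along these lines (a sketch only; the service name, image, and settings here are assumptions for illustration, not the PR's actual file):

```yaml
# data_stream/metrics/_dev/deploy/docker/docker-compose.yml (sketch)
version: "2.3"
services:
  kafka-metrics:                 # hypothetical service name
    image: bitnami/kafka:3.6     # hypothetical image/tag
    ports:
      - 9092
```

Placing the deploy definition under the data stream (rather than at the package root) gives this data stream's system tests their own broker instead of the package-wide shared one.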
@adrianchen-es not much yet. But I've figured out a way to run the tests locally the way they are run in CI (and they indeed fail). I'm trying to troubleshoot. … then run the command: …
@adrianchen-es Comments on the changes (AI generated):

- manifest.yml: type: metrics → type: logs. The Filebeat kafka input is classified by Fleet as a logs-type input regardless of what the manifest declares; Fleet's processor always injects data_stream.type: logs into every event.
- manifest.yml: remove source_mode: synthetic and index_mode: time_series. index_mode: time_series (TSDB) requires every document to contain all routing dimension fields; documents missing dimensions are rejected. source_mode: synthetic reconstructs _source from stored fields.
- fields.yml: remove dimension: true and metric_type: gauge. dimension: true is the TSDB routing path annotation; it only makes sense when index_mode: time_series is active. With TSDB removed, the field is just a plain keyword. metric_type: gauge …
- test-kafka_metric-config.yml: wait_for_data_timeout: 30s → 5m. On a fresh stack, the test environment needs to: start Kafka, produce messages, have Kibana finish uploading the package (component templates + ingest pipeline), have Fleet push the …
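The changes listed in that comment, sketched as YAML fragments (file layout and surrounding keys are assumed; the field name below is hypothetical):

```yaml
# manifest.yml (fragment): Fleet classifies the Filebeat kafka input as
# logs and injects data_stream.type: logs, so the declared type must match.
type: logs
# removed: index_mode: time_series   # TSDB rejects docs missing dimensions
# removed: source_mode: synthetic

# fields.yml (fragment): with TSDB removed, the field is a plain keyword;
# dimension: true and metric_type: gauge are dropped.
- name: some_metric_field            # hypothetical field name
  type: keyword

# test-kafka_metric-config.yml (fragment): allow time on a fresh stack for
# Kafka startup, package upload, and the Fleet policy rollout.
wait_for_data_timeout: 5m
```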
Hey @stefans-elastic. If the failure is due to hardcoding in Filebeat, shouldn't it also fail in my local environment, given it isn't customised? I'll tweak it to align with your patch and see what the CI test returns.
```yaml
        show_user: false
      - name: isolation_level
        type: text
        title: Isolation Level
```
i validated that this config is an exact copy-paste of the original data stream manifest, and it is, with the exception of this one line, which was an error in the original 😄. nice!
One thing we should verify is that the upgrade scenario works, since the input config file moved from the data stream level to the root level of the folder. @stefans-elastic: could you please help with this validation?
tommyers-elastic
left a comment
i'm generally happy with this change. we just need to validate that existing package users can upgrade to the new input package with no breaking changes or weird behaviour.
the only other thing that's a potential blocker for me is the kibana compatibility changes.
💚 Build Succeeded
/test stack 8.19.14
1 nit, else looks good!
@ishleenk17, @tommyers-elastic or @stefans-elastic: could I please get a final look before I merge?
Thanks @ishleenk17
Co-authored-by: Ishleen Kaur <102962586+ishleenk17@users.noreply.github.com>
💚 Build Succeeded
Package kafka_log - 2.0.0 containing this change is available at https://epr.elastic.co/package/kafka_log/2.0.0/






Proposed commit message
Enhance the Kafka log integration to allow ingestion of metrics.
The pattern implemented aligns with other integrations that allow both log and metric ingestion.
A dynamic type is not used, as it would require existing users to recreate the integration, whereas a separate data stream reduces the potential complication and allows existing integrations to enable metrics or switch.
Checklist
- [ ] Added an entry to the changelog.yml file.

Author's Checklist
How to test this PR locally
Related issues
Closes #18264
Screenshots