feat(component validation): add sink validator #17980
Conversation
Datadog Report. Branch report: ✅
tobz
left a comment
A lot of good here overall, but needs a few changes for consistency.
```rust
if !skip_output_driver_metrics {
    if let Some(encoder) = maybe_encoder.as_ref() {
        let mut buffer = BytesMut::new();
        encoder
            .clone()
            .encode(output_event, &mut buffer)
            .expect("should not fail to encode output event");

        output_runner_metrics.received_events_total += 1;
        output_runner_metrics.received_bytes_total += buffer.len() as u64;
    }
}
```
It seems like if we're not skipping output driver metrics, then that would imply we have a sink component under test, which should be using a "real" output external resource... and that this logic should exist in there, similar to how you've changed things for collecting source metrics in input external resources.
I think it's way too confusing to have this here, because it means every external resource is having to pass back a reciprocal encoder to re-encode the event and track the encoded size instead of just... counting the actual bytes the external resource received in the first place.
Yes, I think this was just a temporary thing I added so that I could test the sink validators with the external resource setting the expected values on one end, without changing it for the others yet. But I think the final solution must have the external resources setting the expected values, and not have this boolean flag at all.
> It seems like if we're not skipping output driver metrics, then that would imply we have a sink component under test, which should be using a "real" output external resource... and that this logic should exist in there, similar to how you've changed things for collecting source metrics in input external resources.
So here, if we are not skipping output driver metrics, it means a source or transform is under test. This logic is needed because, in the source and transform case, we have a controlled edge (a vector sink) receiving the output, and the output driver is where we have access to determine the expected values. But we don't want to do this when we are testing sinks, because in that case the external resource the sink is pushing to / pulling from is where the expected values are available.
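The alternative suggested in the review, counting the bytes the external resource actually received rather than re-encoding with a reciprocal encoder, could look roughly like this. This is a sketch with hypothetical names (`OutputRunnerMetrics`, `record_received`), not Vector's actual API:

```rust
// Hypothetical sketch: an output external resource counts the payloads it
// actually receives, instead of re-encoding events to estimate their size.

#[derive(Default, Debug, PartialEq)]
struct OutputRunnerMetrics {
    received_events_total: u64,
    received_bytes_total: u64,
}

impl OutputRunnerMetrics {
    /// Record one event exactly as it arrived on the wire.
    fn record_received(&mut self, payload: &[u8]) {
        self.received_events_total += 1;
        self.received_bytes_total += payload.len() as u64;
    }
}

fn main() {
    let mut metrics = OutputRunnerMetrics::default();
    // Simulate two payloads received by the external resource.
    metrics.record_received(b"{\"message\":\"hello\"}");
    metrics.record_received(b"{\"message\":\"world\"}");
    assert_eq!(metrics.received_events_total, 2);
    assert_eq!(metrics.received_bytes_total, 38);
}
```

This sidesteps the need to thread a reciprocal encoder through every external resource: the resource simply measures what it was handed.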
✅ Deploy Preview for vector-project canceled.
✅ Deploy Preview for vrl-playground canceled.
Datadog Report. Branch report: ✅ 0 Failed, 2094 Passed, 0 Skipped, 1m 24.47s Wall Time
tobz
left a comment
Overall, this is mostly making sense to me since it's really just wiring up validation of the metrics being generated rather than anything brand new, or anything being fundamentally changed about the validation architecture.
Mostly left a lot of comments about things that look like they could/should be cleaned up.
```rust
// TODO - currently we are getting lucky in the testing of the `http_client` source... if the source tries to query
// this server but we have already shut down the thread, then it will generate an error which can throw off our error
// validation.
// I think the solution involves adding synchronization to wait here for the runner to tell us to shutdown.
```
This makes sense, yeah.
I think what we probably want is to trigger the shutdown of the topology once we know all events on the input side have been sent or consumed. Once the events are in the topology, we should be "safe", in terms of knowing that the topology won't actually shutdown until those events make it through to the sink and are processed.
So for an HTTP server external resource, we'd want the shutdown signal to basically tell it to shutdown but only after all events have been sent to the source, which in this case would be essentially waiting until the http_client source has queried the HTTP server enough times, etc. For a push-based source, we'd only care that it managed to send out all events and that we'd gotten a response for all of them.
So we'd essentially update external input resources to have that behavior, and then the shutdown code in the runner would be like
- trigger external input resource to shutdown
- trigger topology to shutdown
- wait for external input resource to shutdown/complete
- wait for topology to shutdown/complete
- rest of the steps, like wait for the external output resource, etc
Does that make sense?
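The ordering above can be sketched with std threads and channels standing in for the actual runner, topology, and external resource types. All names here are hypothetical, not Vector's real shutdown API:

```rust
// Minimal sketch of the proposed runner shutdown sequence:
// trigger both shutdowns first, then wait on each in order.

use std::sync::mpsc;
use std::thread;

fn main() {
    let (input_shutdown_tx, input_shutdown_rx) = mpsc::channel::<()>();
    let (topology_shutdown_tx, topology_shutdown_rx) = mpsc::channel::<()>();

    // "External input resource": finishes sending events, then honors shutdown.
    let input = thread::spawn(move || {
        // ... send all input events to the source here ...
        input_shutdown_rx.recv().expect("runner dropped shutdown sender");
        "input resource done"
    });

    // "Topology": drains in-flight events once told to shut down.
    let topology = thread::spawn(move || {
        topology_shutdown_rx.recv().expect("runner dropped shutdown sender");
        // ... flush remaining events through to the sink here ...
        "topology done"
    });

    // 1. trigger external input resource to shutdown
    input_shutdown_tx.send(()).unwrap();
    // 2. trigger topology to shutdown
    topology_shutdown_tx.send(()).unwrap();
    // 3. wait for external input resource to shutdown/complete
    assert_eq!(input.join().unwrap(), "input resource done");
    // 4. wait for topology to shutdown/complete
    assert_eq!(topology.join().unwrap(), "topology done");
    // 5. ...rest of the steps, e.g. wait for the external output resource...
}
```

The key property is that step 1 happens only after all input events are in the topology, so the topology's own shutdown cannot race ahead of in-flight events.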
This makes sense, yes. I want to say I started down this path and then decided to table it for the time being. I will take a look at it.
I think this is isolated enough to put in a separate PR. I opened a ticket to track that. If it's alright, I would tag you on the review for that as well, since you understand the architecture of this framework.
tobz
left a comment
Overall, this mostly seems fine.
Almost everything I could leave nits about is really stuff that this PR isn't responsible for fixing, so I won't bother. :P
Regression Detector Results
Run ID: a8822e4-c308-4556-9216-cffe8d4e8da0
Performance changes are noted in the perf column of each table:
No significant changes in experiment optimization goals. Confidence level: 90.00%. There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | syslog_log2metric_humio_metrics | ingress throughput | +2.77 | [+2.65, +2.90] |
| ➖ | syslog_log2metric_splunk_hec_metrics | ingress throughput | +2.12 | [+1.96, +2.27] |
| ➖ | file_to_blackhole | egress throughput | +1.84 | [-0.65, +4.33] |
| ➖ | syslog_loki | ingress throughput | +0.78 | [+0.69, +0.87] |
| ➖ | otlp_http_to_blackhole | ingress throughput | +0.78 | [+0.61, +0.95] |
| ➖ | syslog_humio_logs | ingress throughput | +0.22 | [+0.09, +0.35] |
| ➖ | syslog_splunk_hec_logs | ingress throughput | +0.13 | [+0.05, +0.21] |
| ➖ | http_to_http_acks | ingress throughput | +0.11 | [-1.21, +1.44] |
| ➖ | http_to_http_noack | ingress throughput | +0.05 | [-0.06, +0.16] |
| ➖ | datadog_agent_remap_datadog_logs | ingress throughput | +0.04 | [-0.04, +0.13] |
| ➖ | http_to_http_json | ingress throughput | +0.01 | [-0.06, +0.09] |
| ➖ | splunk_hec_indexer_ack_blackhole | ingress throughput | +0.00 | [-0.14, +0.14] |
| ➖ | syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | +0.00 | [-0.11, +0.12] |
| ➖ | splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.00 | [-0.16, +0.16] |
| ➖ | otlp_grpc_to_blackhole | ingress throughput | -0.01 | [-0.10, +0.08] |
| ➖ | splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.02 | [-0.13, +0.10] |
| ➖ | enterprise_http_to_http | ingress throughput | -0.09 | [-0.17, -0.02] |
| ➖ | http_to_s3 | ingress throughput | -0.10 | [-0.38, +0.17] |
| ➖ | syslog_regex_logs2metric_ddmetrics | ingress throughput | -0.12 | [-0.21, -0.02] |
| ➖ | splunk_hec_route_s3 | ingress throughput | -0.16 | [-0.64, +0.32] |
| ➖ | datadog_agent_remap_blackhole_acks | ingress throughput | -0.28 | [-0.37, -0.18] |
| ➖ | fluent_elasticsearch | ingress throughput | -0.46 | [-0.92, -0.00] |
| ➖ | datadog_agent_remap_datadog_logs_acks | ingress throughput | -0.49 | [-0.56, -0.42] |
| ➖ | http_elasticsearch | ingress throughput | -0.78 | [-0.84, -0.72] |
| ➖ | datadog_agent_remap_blackhole | ingress throughput | -0.82 | [-0.91, -0.74] |
| ➖ | http_text_to_http_json | ingress throughput | -1.14 | [-1.26, -1.02] |
| ➖ | socket_to_socket_blackhole | ingress throughput | -1.62 | [-1.71, -1.53] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
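The three criteria can be expressed as a simple predicate. This is an illustrative helper, not the detector's actual code:

```rust
// Hypothetical sketch of the regression decision rule described above.
fn is_regression(delta_mean_pct: f64, ci_low: f64, ci_high: f64, erratic: bool) -> bool {
    // Criterion 1: |Δ mean %| >= 5.00%.
    let big_enough = delta_mean_pct.abs() >= 5.0;
    // Criterion 2: the 90% CI does not contain zero.
    let ci_excludes_zero = !(ci_low <= 0.0 && 0.0 <= ci_high);
    // Criterion 3: the experiment is not marked "erratic".
    big_enough && ci_excludes_zero && !erratic
}

fn main() {
    // syslog_log2metric_humio_metrics: +2.77 [+2.65, +2.90] -> too small to flag.
    assert!(!is_regression(2.77, 2.65, 2.90, false));
    // file_to_blackhole: +1.84 [-0.65, +4.33] -> CI contains zero.
    assert!(!is_regression(1.84, -0.65, 4.33, false));
    // A hypothetical -6.0 [-7.0, -5.0] change would qualify as a regression.
    assert!(is_regression(-6.0, -7.0, -5.0, false));
}
```

This matches the table above: every row here falls below the 5% threshold or has a CI spanning zero, hence no regressions were flagged.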
* add fix and small refactor
* fix compilation errors
* 3 ticks
* dont compute expected metrics in validator
* cleanup
* cleanup
* clippy
* feedback tz: sent_eventssssss
* feedback tz: fix telemetry shutdown finishing logic
* 3 ticks
* small reorg to add sinks
* mini refactor of the component spec validators
* attempt to set expected values from the resource
* feedback tz- from not try_from
* back to 3 ticks
* fix incorrect expected values
* Even more reduction
* clippy
* add the discarded events total check
* workaround the new sync issues
* check events
* partial feedback
* thought i removed that
* use ref
closes: #16840
closes: #16843
closes: #16844
closes: #16845
closes: #16616
addresses: #16842
addresses: #16847
This PR adds support for validating the "happy path" metrics for sinks, which for the most part did not require much beyond enabling the same checks we had already added for sources. But because so much of the logic was the same, the resulting PR is mostly a refactoring to reduce the duplicate code it would otherwise have introduced (and that was already there, really).
Notes:
With the http sink being tested, it became apparent that setting expected metrics needs to happen in the resources themselves (at least for some), as was anticipated previously. So part of the changes here say: if it is a sink that we are testing, we need to calculate the expected values from the external resource itself.
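That dispatch can be summarized in a toy sketch, with hypothetical names standing in for the framework's actual types:

```rust
// Illustrative only: which side of the topology owns the expected metric
// values, depending on the component under test.

#[derive(Clone, Copy, PartialEq, Debug)]
enum ComponentType {
    Source,
    Transform,
    Sink,
}

/// Where the validation runner should look for expected metric values.
fn expected_values_owner(component: ComponentType) -> &'static str {
    match component {
        // Sources/transforms: the controlled vector sink edge (the output
        // driver) sees everything the component emits.
        ComponentType::Source | ComponentType::Transform => "output driver",
        // Sinks: only the external resource the sink pushes to / is pulled
        // from sees what was actually sent.
        ComponentType::Sink => "external resource",
    }
}

fn main() {
    assert_eq!(expected_values_owner(ComponentType::Sink), "external resource");
    assert_eq!(expected_values_owner(ComponentType::Source), "output driver");
    assert_eq!(expected_values_owner(ComponentType::Transform), "output driver");
}
```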