feat(component validation): add sink error path validation + multi config #18062
Conversation
… into neuronull/component_validation_sink_component_spec
Datadog Report (branch report): ✅ 0 Failed, 2105 Passed, 0 Skipped, 1m 23.8s Wall Time
…o neuronull/component_validation_sink_sad_path
Datadog Report (branch report): ✅ 0 Failed, 2118 Passed, 0 Skipped, 1m 24.77s Wall Time
tobz left a comment
Overall, I think this makes sense and is an improvement on the current state of things.
I left some take-it-or-leave comments, mostly around how we define test case event data, that could be useful to generalize (IMO) the test case definitions in cases of more complex events or, in the future, when we test stuff like metric-specific components.
```rust
event: EventData,
name: String,
value: Value,
fail: Option<bool>,
```
What scenarios do we have where we want to inject an additional field but don't want to have the event fail?
If that's common, then we probably want to add a variant to EventData that allows us to deserialize from a map, maybe something like...
```rust
#[serde(tag = "type", content = "data")]
enum EventData {
    #[serde(rename = "log")]
    Log(HashMap<String, Value>),
    #[serde(untagged)]
    RawLog(String),
}
```

I think this would be sufficient to allow us to define events like so:
```yaml
# Raw log message, like we do now:
events:
  - my simple log message

# More advanced:
events:
  - type: log
    data:
      message: my simple log message
      level: "1"

# Bundled with something like an expected decoding failure:
events:
  - fail_encoding_of:
      type: log
      data:
        message: my simple log message
        level: "1"
```

Again, I think this would work, and perhaps more importantly (at least in my mind, but this point is just a loosely-held opinion) it would be somewhat clearer, because in order to construct a more advanced event (beyond just the raw message), we wouldn't be limited to the raw message plus a single injected field. This would let us write event data where, for example, a log event doesn't even have a message field, and so on. A future improvement could then be to make the failure-inducing modes their own enum type -- a variant for "should fail on its own", a variant for "should fail if we mess up the encoding", and so on -- and then that failure type could just be a dedicated field, such that the event data definitions might end up looking like:
```yaml
events:
  - failure_mode: invalid_encoding
    type: log
    data:
      message: my simple log message
  - failure_mode: invalid_event
    type: log
    data:
      message: my simple log message
      level: "1"
```

It was always sort of my plan to make EventData work this way, with the single string variant as an escape hatch for defining basic log events without any additional boilerplate, while more advanced cases had dedicated variants, especially since we might eventually want to test metrics this way, and we'd need an answer for that.
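The failure-mode idea above can be sketched in Rust. This is a minimal illustrative sketch, assuming hypothetical names (`FailureMode`, `TestEvent`, `expect_failure`) that are not from the Vector codebase; the real framework would pair this with serde deserialization of the YAML shown above.

```rust
// Hypothetical sketch of a dedicated failure-mode enum; names are
// illustrative, not the actual Vector component-validation types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum FailureMode {
    // The event is valid and should pass through unchanged.
    None,
    // "should fail if we mess up the encoding"
    InvalidEncoding,
    // "should fail on its own"
    InvalidEvent,
}

#[derive(Debug)]
struct TestEvent {
    failure_mode: FailureMode,
    message: String,
}

// Whether the validation framework should expect this event to be rejected.
fn expect_failure(event: &TestEvent) -> bool {
    event.failure_mode != FailureMode::None
}

fn main() {
    let ok = TestEvent {
        failure_mode: FailureMode::None,
        message: "my simple log message".into(),
    };
    let bad = TestEvent {
        failure_mode: FailureMode::InvalidEncoding,
        message: "my simple log message".into(),
    };
    println!("{} {}", expect_failure(&ok), expect_failure(&bad)); // false true
}
```

Keeping the failure mode as its own field (rather than a wrapper like `fail_encoding_of`) means new failure variants can be added without changing the event-data shape.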
All of that said, I realize we're focusing primarily on log-specific components right now and so this is purely a suggestion on how we might be able to make this a little more generalized out of the gate.
> What scenarios do we have where we want to inject an additional field but don't want to have the event fail?
Yeah, that is a good point. I think when I made that optional I was thinking about making it flexible so we could hit specific code paths in the components. We don't really have a need for this right now, and as you pointed out, it may be more useful for the metrics cases.
The changes you suggest make sense to me.
One thing to note (and I was planning to tell you this today 😅): in my branch to fix the synchronization issues, I actually removed this code path for injecting specific fields. In that branch I ended up changing the error validation for sinks to not rely on the codecs for the errors, and instead generate them from the external resource. I felt this was a more realistic scenario.
In any case, for all the reasons you mentioned and the ones I added, I'll leave this comment open and we can refer back to it if/when we need to add this functionality.
Thanks!
Regression Detector Results
Run ID: a0b3860b-b639-4401-8452-c9adc951492b
Performance changes are noted in the perf column of each table:
No significant changes in experiment optimization goals
Confidence level: 90.00%. There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
| perf | experiment | goal | Δ mean % | Δ mean % CI |
|---|---|---|---|---|
| ➖ | syslog_log2metric_humio_metrics | ingress throughput | +4.38 | [+4.24, +4.52] |
| ➖ | syslog_regex_logs2metric_ddmetrics | ingress throughput | +3.77 | [+3.65, +3.89] |
| ➖ | syslog_loki | ingress throughput | +3.10 | [+2.99, +3.20] |
| ➖ | syslog_humio_logs | ingress throughput | +2.92 | [+2.82, +3.03] |
| ➖ | syslog_splunk_hec_logs | ingress throughput | +2.04 | [+1.97, +2.11] |
| ➖ | splunk_hec_route_s3 | ingress throughput | +1.89 | [+1.37, +2.41] |
| ➖ | syslog_log2metric_splunk_hec_metrics | ingress throughput | +1.21 | [+1.07, +1.35] |
| ➖ | datadog_agent_remap_blackhole | ingress throughput | +1.03 | [+0.92, +1.14] |
| ➖ | syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | +0.68 | [+0.55, +0.81] |
| ➖ | http_text_to_http_json | ingress throughput | +0.33 | [+0.19, +0.47] |
| ➖ | datadog_agent_remap_datadog_logs | ingress throughput | +0.26 | [+0.16, +0.35] |
| ➖ | datadog_agent_remap_blackhole_acks | ingress throughput | +0.18 | [+0.09, +0.27] |
| ➖ | http_to_http_noack | ingress throughput | +0.15 | [+0.06, +0.24] |
| ➖ | http_to_http_json | ingress throughput | +0.06 | [-0.02, +0.14] |
| ➖ | splunk_hec_indexer_ack_blackhole | ingress throughput | +0.00 | [-0.14, +0.15] |
| ➖ | splunk_hec_to_splunk_hec_logs_acks | ingress throughput | +0.00 | [-0.16, +0.16] |
| ➖ | splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.05 | [-0.16, +0.07] |
| ➖ | enterprise_http_to_http | ingress throughput | -0.07 | [-0.15, +0.01] |
| ➖ | datadog_agent_remap_datadog_logs_acks | ingress throughput | -0.10 | [-0.18, -0.01] |
| ➖ | http_to_s3 | ingress throughput | -0.50 | [-0.78, -0.22] |
| ➖ | otlp_grpc_to_blackhole | ingress throughput | -0.55 | [-0.64, -0.46] |
| ➖ | fluent_elasticsearch | ingress throughput | -0.62 | [-1.10, -0.13] |
| ➖ | http_to_http_acks | ingress throughput | -0.75 | [-2.05, +0.56] |
| ➖ | otlp_http_to_blackhole | ingress throughput | -1.03 | [-1.17, -0.88] |
| ➖ | socket_to_socket_blackhole | ingress throughput | -1.30 | [-1.38, -1.21] |
| ➖ | file_to_blackhole | egress throughput | -2.32 | [-4.88, +0.25] |
| ➖ | http_elasticsearch | ingress throughput | -2.92 | [-2.99, -2.85] |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
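The three criteria above can be expressed directly in code. This is an illustrative sketch (the function name and signature are assumptions, not the detector's actual implementation), using the thresholds stated in this report: a 5.00% effect size and a 90% confidence interval.

```rust
// Sketch of the regression criteria described above. A change is flagged only
// if it is large enough, its CI excludes zero, and it is not marked erratic.
fn is_regression(delta_mean_pct: f64, ci: (f64, f64), erratic: bool) -> bool {
    let big_enough = delta_mean_pct.abs() >= 5.00; // criterion 1: |Δ mean %| ≥ 5.00%
    let ci_excludes_zero = ci.0 > 0.0 || ci.1 < 0.0; // criterion 2: CI does not contain zero
    let not_erratic = !erratic; // criterion 3: not marked "erratic"
    big_enough && ci_excludes_zero && not_erratic
}

fn main() {
    // syslog_log2metric_humio_metrics row: +4.38 [+4.24, +4.52].
    // The CI excludes zero, but the effect is below 5.00%, so it is not flagged.
    println!("{}", is_regression(4.38, (4.24, 4.52), false)); // false
}
```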
…nfig (vectordotdev#18062)

* add fix and small refactor
* fix compilation errors
* 3 ticks
* dont compute expected metrics in validator
* cleanup
* cleanup
* clippy
* feedback tz: sent_eventssssss
* feedback tz: fix telemetry shutdown finishing logic
* 3 ticks
* small reorg to add sinks
* mini refactor of the component spec validators
* attempt to set expected values from the resource
* feedback tz- from not try_from
* back to 3 ticks
* fix incorrect expected values
* Even more reduction
* clippy
* add the discarded events total check
* workaround the new sync issues
* multi config support
* cleanup
* check events
* partial feedback
* thought i removed that
* use ref
* feedback: dont introduce PassThroughFail variant
* feedback: adjust enum variant names for clarity
* feedback: no idea what I was thinking with `input_codec`
* spell check
* fr
* feedback- update docs
closes: #16846
closes: #16847
ref: #18027
Notes:
With a ValidatableComponent, you draw a mapping from a config and that name. If the framework can't find a matching name, it errors out. If no name is specified in the test case, that's OK: it just uses the first config it finds that doesn't have a name specified.
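The name-matching behavior described in the note can be sketched as follows. This is a hypothetical illustration, assuming a made-up `resolve_config` helper and a simple `(optional name, config)` representation; it is not the framework's actual API.

```rust
// Hypothetical sketch of matching a test case to a component config by name.
// A named test case must find a config with that exact name, or we error out;
// an unnamed test case falls back to the first config without a name.
fn resolve_config<'a>(
    configs: &'a [(Option<&'a str>, &'a str)], // (optional name, config)
    test_case_name: Option<&str>,
) -> Result<&'a str, String> {
    match test_case_name {
        Some(name) => configs
            .iter()
            .find(|(n, _)| *n == Some(name))
            .map(|(_, cfg)| *cfg)
            .ok_or_else(|| format!("no config named `{name}`")),
        None => configs
            .iter()
            .find(|(n, _)| n.is_none())
            .map(|(_, cfg)| *cfg)
            .ok_or_else(|| "no unnamed config found".to_string()),
    }
}

fn main() {
    let configs = [(None, "default config"), (Some("sad_path"), "error config")];
    println!("{:?}", resolve_config(&configs, Some("sad_path"))); // Ok("error config")
    println!("{:?}", resolve_config(&configs, None)); // Ok("default config")
}
```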