
feat(component validation): add sink error path validation + multi config#18062

Merged
neuronull merged 43 commits intomasterfrom
neuronull/component_validation_sink_sad_path
Feb 22, 2024

Conversation

@neuronull
Contributor

closes: #16846
closes: #16847

ref: #18027

Notes:

  • The error-metrics validation for sinks demonstrated that we need the ability to specify different component configs for different test cases. If the input runner and the component under test share the same codec, there is no real way to inject a failure without adding a transform to the topology. Since it's a reasonable expectation that we'll need different config options to hit specific errors on some components (e.g. Data Volume), that was selected as the way forward here.
    • This was implemented by allowing the YAML config to set a "config_name" for a specific test case; the implementation of ValidatableComponent then provides a mapping from each config to that name. If the framework can't find a matching name, it errors out. If a test case specifies no name, the framework falls back to the first config it finds that has no name specified.
  • This PR also demonstrates that the test runner increasingly has to make assumptions (e.g., if we are testing a sink and expecting a failure, we still expect to see component_received_events, etc.). As we roll out to more components, I can see the possibility that those assumptions fall apart, or that the logic gets overly complex. We may at some point need to re-evaluate how expected values are set if that happens.
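
The named-config lookup described in the first note can be sketched as follows. This is a hypothetical, std-only illustration, not the actual framework code: the type and field names (ComponentConfig, toml, resolve_config) are assumptions for the sake of the example.

```rust
/// Stand-in for a component configuration registered by a
/// `ValidatableComponent`, optionally tagged with a name.
#[derive(Debug, Clone, PartialEq)]
struct ComponentConfig {
    name: Option<String>,
    toml: String, // placeholder for the real component configuration
}

/// Resolve the config for a test case: match by name if the test case
/// specifies one, otherwise fall back to the first unnamed config. A
/// missing name is an error, mirroring the behavior described above.
fn resolve_config<'a>(
    configs: &'a [ComponentConfig],
    requested: Option<&str>,
) -> Result<&'a ComponentConfig, String> {
    match requested {
        Some(name) => configs
            .iter()
            .find(|c| c.name.as_deref() == Some(name))
            .ok_or_else(|| format!("no config named `{name}`")),
        None => configs
            .iter()
            .find(|c| c.name.is_none())
            .ok_or_else(|| "no unnamed default config".to_string()),
    }
}
```

A test case with `config_name: error_case` would then resolve to the matching named entry, while test cases without a name get the default.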

@neuronull neuronull added sink: http Anything `http` sink related domain: tests Anything related to Vector's internal tests domain: observability Anything related to monitoring/observing Vector labels Jul 21, 2023
@neuronull neuronull requested a review from tobz January 26, 2024 21:55
@neuronull neuronull added the no-changelog Changes in this PR do not need user-facing explanations in the release changelog label Jan 26, 2024
@datadog-vectordotdev

datadog-vectordotdev bot commented Jan 26, 2024

Datadog Report

Branch report: neuronull/component_validation_sink_sad_path
Commit report: ea53261
Test service: vector

✅ 0 Failed, 2105 Passed, 0 Skipped, 1m 23.8s Wall Time

Base automatically changed from neuronull/component_validation_sink_component_spec to master February 5, 2024 18:29
@datadog-vectordotdev

datadog-vectordotdev bot commented Feb 20, 2024

Datadog Report

Branch report: neuronull/component_validation_sink_sad_path
Commit report: e959d9c
Test service: vector

✅ 0 Failed, 2118 Passed, 0 Skipped, 1m 24.77s Wall Time

Contributor

@tobz tobz left a comment


Overall, I think this makes sense and is an improvement on the current state of things.

I left some take-it-or-leave-it comments, mostly around how we define test case event data; generalizing the test case definitions could (IMO) be useful for more complex events or, in the future, when we test things like metric-specific components.

event: EventData,
name: String,
value: Value,
fail: Option<bool>,
Contributor
What scenarios do we have where we want to inject an additional field but don't want to have the event fail?

If that's common, then we probably want to add a variant to EventData that allows us to deserialize from a map, maybe something like...

#[serde(tag = "type", content = "data")]
enum EventData {
    #[serde(rename = "log")]
    Log(HashMap<String, Value>),
    
    #[serde(untagged)]
    RawLog(String),
}

I think this would be sufficient to allow us to define events like so:

# Raw log message, like we do now:
events:
- my simple log message

# More advanced:
events:
- type: log
  data:
    message: my simple log message
    level: "1"
    
# Bundled with something like an expected decoding failure:
events:
- fail_encoding_of:
    type: log
    data:
      message: my simple log message
      level: "1"

Again, I think this would work, and perhaps more importantly (at least in my mind, though this is just a loosely-held opinion) it would be somewhat clearer: to construct a more advanced event, we wouldn't be limited to the raw message plus a single injected field. This would let us write event data where, for example, a log event doesn't even have a message field, and so on. A future improvement could then be to make the failure-inducing modes their own enum type -- a variant for "should fail on its own", a variant for "should fail if we mess up the encoding", and so on -- and that failure type could then be a dedicated field, such that the event data definitions might end up looking like:

events:
- failure_mode: invalid_encoding
  type: log
  data:
    message: my simple log message
- failure_mode: invalid_event
  type: log
  data:
    message: my simple log message
    level: "1"

It was always sort of my plan to make EventData work this way: the single string variant as an escape hatch for defining basic log events without any additional boilerplate, with dedicated variants for more advanced cases, especially since we might eventually want to test metrics this way and we'd need an answer for that.

All of that said, I realize we're focusing primarily on log-specific components right now and so this is purely a suggestion on how we might be able to make this a little more generalized out of the gate.
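
To make the "dedicated failure-mode field" idea concrete, here is a minimal, std-only sketch; the variant and field names (FailureMode, TestEvent, expects_failure) are assumptions for illustration, not the project's actual types.

```rust
/// Hypothetical failure-inducing modes, one variant per way the test
/// runner can make an event fail.
#[derive(Debug, Clone, Copy, PartialEq)]
enum FailureMode {
    /// The event is well-formed on the wire but should be rejected
    /// by the component under test.
    InvalidEvent,
    /// The runner should deliberately corrupt the event's encoding.
    InvalidEncoding,
}

/// A test case event: failure mode is a dedicated optional field, so a
/// plain event (no mode set) is expected to be delivered successfully.
#[derive(Debug)]
struct TestEvent {
    failure_mode: Option<FailureMode>,
    message: String,
}

impl TestEvent {
    /// Any failure mode means the validator should expect error
    /// telemetry rather than successful delivery.
    fn expects_failure(&self) -> bool {
        self.failure_mode.is_some()
    }
}
```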

Contributor Author

What scenarios do we have where we want to inject an additional field but don't want to have the event fail?

Yeah, that is a good point. I think when I made that optional I was trying to keep it flexible so we could hit specific code paths in the components. We don't really have a need for this right now, and as you pointed out, it may be more useful for metrics cases.

The changes you suggest make sense to me.

One thing to note (I was planning to tell you this today 😅): in my branch to fix the synchronization issues, I actually removed this code path for injecting specific fields. In that branch I ended up changing the error validation for sinks to generate errors from the external resource instead of relying on the codecs, which I felt was a more realistic scenario.

In any case, for all the reasons you mentioned and the ones I added, I'll leave this comment open and we can refer back to it if/when we need to add this functionality.

Thanks!

@github-actions

Regression Detector Results

Run ID: a0b3860b-b639-4401-8452-c9adc951492b
Baseline: 695f847
Comparison: a6da1d8
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- |
| syslog_log2metric_humio_metrics | ingress throughput | +4.38 | [+4.24, +4.52] |
| syslog_regex_logs2metric_ddmetrics | ingress throughput | +3.77 | [+3.65, +3.89] |
| syslog_loki | ingress throughput | +3.10 | [+2.99, +3.20] |
| syslog_humio_logs | ingress throughput | +2.92 | [+2.82, +3.03] |
| syslog_splunk_hec_logs | ingress throughput | +2.04 | [+1.97, +2.11] |
| splunk_hec_route_s3 | ingress throughput | +1.89 | [+1.37, +2.41] |
| syslog_log2metric_splunk_hec_metrics | ingress throughput | +1.21 | [+1.07, +1.35] |
| datadog_agent_remap_blackhole | ingress throughput | +1.03 | [+0.92, +1.14] |
| syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | +0.68 | [+0.55, +0.81] |
| http_text_to_http_json | ingress throughput | +0.33 | [+0.19, +0.47] |
| datadog_agent_remap_datadog_logs | ingress throughput | +0.26 | [+0.16, +0.35] |
| datadog_agent_remap_blackhole_acks | ingress throughput | +0.18 | [+0.09, +0.27] |
| http_to_http_noack | ingress throughput | +0.15 | [+0.06, +0.24] |
| http_to_http_json | ingress throughput | +0.06 | [-0.02, +0.14] |
| splunk_hec_indexer_ack_blackhole | ingress throughput | +0.00 | [-0.14, +0.15] |
| splunk_hec_to_splunk_hec_logs_acks | ingress throughput | +0.00 | [-0.16, +0.16] |
| splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.05 | [-0.16, +0.07] |
| enterprise_http_to_http | ingress throughput | -0.07 | [-0.15, +0.01] |
| datadog_agent_remap_datadog_logs_acks | ingress throughput | -0.10 | [-0.18, -0.01] |
| http_to_s3 | ingress throughput | -0.50 | [-0.78, -0.22] |
| otlp_grpc_to_blackhole | ingress throughput | -0.55 | [-0.64, -0.46] |
| fluent_elasticsearch | ingress throughput | -0.62 | [-1.10, -0.13] |
| http_to_http_acks | ingress throughput | -0.75 | [-2.05, +0.56] |
| otlp_http_to_blackhole | ingress throughput | -1.03 | [-1.17, -0.88] |
| socket_to_socket_blackhole | ingress throughput | -1.30 | [-1.38, -1.21] |
| file_to_blackhole | egress throughput | -2.32 | [-4.88, +0.25] |
| http_elasticsearch | ingress throughput | -2.92 | [-2.99, -2.85] |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
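
The three criteria above can be sketched as a single predicate. This is a minimal illustration of the stated decision rule, not the detector's actual implementation; the function name and signature are assumed.

```rust
/// Flag a change as worth investigating only if, per the criteria above:
/// 1. the estimated |Δ mean %| is at least the 5.00% effect size tolerance,
/// 2. the 90% confidence interval (ci.0, ci.1) does not contain zero, and
/// 3. the experiment is not marked erratic.
fn is_regression(delta_mean_pct: f64, ci: (f64, f64), erratic: bool) -> bool {
    let big_enough = delta_mean_pct.abs() >= 5.0;
    let ci_excludes_zero = ci.0 > 0.0 || ci.1 < 0.0;
    big_enough && ci_excludes_zero && !erratic
}
```

For example, the largest change in the table above (+4.38% with CI [+4.24, +4.52]) is not flagged because it falls under the 5% tolerance, even though its interval excludes zero.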

Merged via the queue into master with commit a6da1d8 Feb 22, 2024
@neuronull neuronull deleted the neuronull/component_validation_sink_sad_path branch February 22, 2024 17:57
AndrooTheChen pushed a commit to discord/vector that referenced this pull request Sep 23, 2024
…nfig (vectordotdev#18062)

* add fix and small refactor

* fix compilation errors

* 3 ticks

* dont compute expected metrics in validator

* cleanup

* cleanup

* clippy

* feedback tz: sent_eventssssss

* feedback tz: fix telemetry shutdown finishing logic

* 3 ticks

* small reorg to add sinks

* mini refactor of the component spec validators

* attempt to set expected values from the resource

* feedback tz- from not try_from

* back to 3 ticks

* fix incorrect expected values

* Even more reduction

* clippy

* add the discarded events total check

* workaround the new sync issues

* multi config support

* cleanup

* check events

* partial feedback

* thought i removed that

* use ref

* feedback: dont introduce PassThroughFail variant

* feedback: adjust enum variant names for clarity

* feedback: no idea what I was thinking with `input_codec`

* spell check

* fr

* feedback- update docs

Labels

  • domain: observability - Anything related to monitoring/observing Vector
  • domain: sinks - Anything related to the Vector's sinks
  • domain: sources - Anything related to the Vector's sources
  • domain: tests - Anything related to Vector's internal tests
  • no-changelog - Changes in this PR do not need user-facing explanations in the release changelog
  • sink: http - Anything `http` sink related

Development

Successfully merging this pull request may close these issues.

  • Validate sink component metric component_discarded_events_total
  • Validate sink component metric component_errors_total
