
feat(component validation): add sink validator #17980

Merged
neuronull merged 31 commits into master from
neuronull/component_validation_sink_component_spec
Feb 5, 2024
Conversation

@neuronull
Contributor

@neuronull neuronull commented Jul 14, 2023

closes: #16840
closes: #16843
closes: #16844
closes: #16845
closes: #16616
addresses: #16842
addresses: #16847

This PR adds support for validating the "happy path" metrics for sinks. For the most part this did not require much beyond enabling the same checks we had already added for sources, but because so much of the logic was shared, the resulting PR is mostly a refactor to reduce the duplicate code it would otherwise have introduced (and that was already there, really).

Notes:

  • With the http sink being tested, it became apparent that setting expected metrics needs to happen in the resources themselves (at least for some), as was anticipated previously. So part of the changes here say: if it is a sink that we are testing, we need to calculate the expected values from the external resource itself.
  • There are a couple of TODO items added around synchronization problems ... the crude sleeps work for now, but we are going to need a more robust solution. It deserves its own PR, I think.

@neuronull neuronull added the `domain: tests` label (Anything related to Vector's internal tests) Jul 14, 2023
@neuronull neuronull self-assigned this Jul 14, 2023
@datadog-vectordotdev

datadog-vectordotdev bot commented Jul 14, 2023

Datadog Report

Branch report: neuronull/component_validation_sink_component_spec
Commit report: aa129b6

vector: 0 Failed, 0 New Flaky, 1921 Passed, 0 Skipped, 1m 21.53s Wall Time

tobz
tobz previously requested changes Jul 19, 2023
Contributor

@tobz tobz left a comment


A lot of good here overall, but needs a few changes for consistency.

Comment on lines 594 to 605
if !skip_output_driver_metrics {
    if let Some(encoder) = maybe_encoder.as_ref() {
        // Re-encode the output event so we can count its encoded size.
        let mut buffer = BytesMut::new();
        encoder
            .clone()
            .encode(output_event, &mut buffer)
            .expect("should not fail to encode output event");

        // Tally driver-side "received" telemetry for later validation.
        output_runner_metrics.received_events_total += 1;
        output_runner_metrics.received_bytes_total += buffer.len() as u64;
    }
}
Contributor


It seems like if we're not skipping output driver metrics, then that would imply we have a sink component under test, which should be using a "real" output external resource... and that this logic should exist in there, similar to how you've changed things for collecting source metrics in input external resources.

I think it's way too confusing to have this here, because it means every external resource is having to pass back a reciprocal encoder to re-encode the event and track the encoded size instead of just... counting the actual bytes the external resource received in the first place.

Contributor Author


Yes, I think this was just a temporary thing I added so that I could test the sink validators with the external resource setting the expected values on one end, without changing it for the others yet. But I think the final solution must have the external resources setting the expected values, and not have this boolean flag at all.

Contributor Author


> It seems like if we're not skipping output driver metrics, then that would imply we have a sink component under test, which should be using a "real" output external resource... and that this logic should exist in there, similar to how you've changed things for collecting source metrics in input external resources.

So, here if we are not skipping output driver metrics, it means a source or transform is under test. This logic is needed because, in the source and transform case, we have a controlled edge (a Vector sink) receiving the output, and the output driver is the place where we can determine the expected values. We don't want to do this when we are testing sinks, because in that case the external resource the sink is pushing to / pulling from is where the expected values are available.
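The sink-side alternative described here can be sketched as follows: the external resource tallies the expected telemetry directly from the payloads it receives, instead of the runner re-encoding events. This is a minimal illustration, not Vector's actual API; `RunnerMetrics` field names mirror the quoted diff, but `record_payload` is hypothetical:

```rust
// Hypothetical sketch: an external output resource computes expected
// values from the raw payloads the sink under test delivers to it.
#[derive(Default, Debug, PartialEq)]
struct RunnerMetrics {
    received_events_total: u64,
    received_bytes_total: u64,
}

impl RunnerMetrics {
    // Called once per payload received from the sink under test;
    // `event_count` is how many events the resource decoded from it.
    fn record_payload(&mut self, payload: &[u8], event_count: u64) {
        self.received_events_total += event_count;
        // Count the actual bytes received, no reciprocal encoder needed.
        self.received_bytes_total += payload.len() as u64;
    }
}

fn main() {
    let mut metrics = RunnerMetrics::default();
    metrics.record_payload(b"{\"message\":\"a\"}", 1);
    metrics.record_payload(b"{\"message\":\"b\"}", 1);
    println!("{metrics:?}");
}
```

This avoids passing an encoder back through the runner: the byte count is taken from what actually arrived at the resource.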

@neuronull neuronull dismissed tobz’s stale review July 19, 2023 20:25

changes made

Base automatically changed from neuronull/draft_component_validation_better_validation to master July 19, 2023 22:21
@netlify

netlify bot commented Jul 19, 2023

Deploy Preview for vector-project canceled.

Name Link
🔨 Latest commit 006db51
🔍 Latest deploy log https://app.netlify.com/sites/vector-project/deploys/64b9b9b9dca1360008f8d790

@netlify

netlify bot commented Jul 19, 2023

Deploy Preview for vrl-playground canceled.

Name Link
🔨 Latest commit 006db51
🔍 Latest deploy log https://app.netlify.com/sites/vrl-playground/deploys/64b9b9b9682db400084f3e4b

@neuronull neuronull marked this pull request as ready for review July 19, 2023 22:40
@neuronull neuronull added the `no-changelog` label (Changes in this PR do not need user-facing explanations in the release changelog) Jan 23, 2024
@datadog-vectordotdev

datadog-vectordotdev bot commented Jan 23, 2024

Datadog Report

Branch report: neuronull/component_validation_sink_component_spec
Commit report: 353be92
Test service: vector

✅ 0 Failed, 2094 Passed, 0 Skipped, 1m 24.47s Wall Time

@neuronull neuronull requested a review from tobz January 29, 2024 15:52
Contributor

@tobz tobz left a comment


Overall, this is mostly making sense to me since it's really just wiring up validation of the metrics being generated rather than anything brand new, or anything being fundamentally changed about the validation architecture.

Mostly left a lot of comments about things that look like they could/should be cleaned up.

Comment on lines +179 to +184

// TODO - currently we are getting lucky in the testing of `http_client` source... if the source tries to query
// this server but we have already shut down the thread, then it will generate an error which can throw off our error
// validation.
// I think the solution involves adding synchronization to wait here for the runner to tell us to shutdown.

Contributor


This makes sense, yeah.

I think what we probably want is to trigger the shutdown of the topology once we know all events on the input side have been sent or consumed. Once the events are in the topology, we should be "safe", in terms of knowing that the topology won't actually shutdown until those events make it through to the sink and are processed.

So for an HTTP server external resource, we'd want the shutdown signal to basically tell it to shutdown but only after all events have been sent to the source, which in this case would be essentially waiting until the http_client source has queried the HTTP server enough times, etc. For a push-based source, we'd only care that it managed to send out all events and that we'd gotten a response for all of them.

So we'd essentially update external input resources to have that behavior, and then the shutdown code in the runner would be like

  • trigger external input resource to shutdown
  • trigger topology to shutdown
  • wait for external input resource to shutdown/complete
  • wait for topology to shutdown/complete
  • rest of the steps, like wait for the external output resource, etc

Does that make sense?
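The ordered shutdown steps above can be modeled with plain threads and channels. This is a minimal sketch of the proposed ordering, not Vector's runner API; `Component`, `spawn_component`, and the log strings are all illustrative:

```rust
use std::sync::mpsc;
use std::thread;

// Minimal stand-in for a runner-managed component: it blocks until it
// receives a shutdown trigger, then reports completion.
struct Component {
    shutdown_tx: mpsc::Sender<()>,
    done_rx: mpsc::Receiver<String>,
}

fn spawn_component(name: &'static str) -> Component {
    let (shutdown_tx, shutdown_rx) = mpsc::channel();
    let (done_tx, done_rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = shutdown_rx.recv(); // wait for the shutdown trigger
        let _ = done_tx.send(format!("{name} complete"));
    });
    Component { shutdown_tx, done_rx }
}

// The proposed ordering: trigger both shutdowns first, then wait for the
// input resource to finish before waiting for the topology.
fn run_shutdown_sequence() -> Vec<String> {
    let input = spawn_component("external input resource");
    let topology = spawn_component("topology");

    input.shutdown_tx.send(()).unwrap(); // 1. trigger input resource shutdown
    topology.shutdown_tx.send(()).unwrap(); // 2. trigger topology shutdown
    let mut log = vec![input.done_rx.recv().unwrap()]; // 3. wait for input resource
    log.push(topology.done_rx.recv().unwrap()); // 4. wait for topology
    log
}

fn main() {
    for step in run_shutdown_sequence() {
        println!("{step}");
    }
}
```

The key property is that the input resource is confirmed done before the topology is awaited, so all events are known to be inside the topology when it drains.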

Contributor Author


This makes sense, yes. I want to say I started down this path and then decided to table it for the time being. I will take a look at it.

Contributor Author


I think this is isolated enough to put in a separate PR. I opened a ticket to track that. If it's alright, I'll probably tag you on the review for that as well, since you understand the architecture of this framework.

Contributor

@tobz tobz left a comment


Overall, this mostly seems fine.

Almost everything I could leave nits about is really stuff that this PR isn't responsible for fixing, so I won't bother. :P

@neuronull neuronull added this pull request to the merge queue Feb 5, 2024
@github-actions

github-actions bot commented Feb 5, 2024

Regression Detector Results

Run ID: a8822e4-c308-4556-9216-cffe8d4e8da0
Baseline: 0a2dc2b
Comparison: 28a4cb4
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI |
| --- | --- | --- | --- | --- |
| ➖ | syslog_log2metric_humio_metrics | ingress throughput | +2.77 | [+2.65, +2.90] |
| ➖ | syslog_log2metric_splunk_hec_metrics | ingress throughput | +2.12 | [+1.96, +2.27] |
| ➖ | file_to_blackhole | egress throughput | +1.84 | [-0.65, +4.33] |
| ➖ | syslog_loki | ingress throughput | +0.78 | [+0.69, +0.87] |
| ➖ | otlp_http_to_blackhole | ingress throughput | +0.78 | [+0.61, +0.95] |
| ➖ | syslog_humio_logs | ingress throughput | +0.22 | [+0.09, +0.35] |
| ➖ | syslog_splunk_hec_logs | ingress throughput | +0.13 | [+0.05, +0.21] |
| ➖ | http_to_http_acks | ingress throughput | +0.11 | [-1.21, +1.44] |
| ➖ | http_to_http_noack | ingress throughput | +0.05 | [-0.06, +0.16] |
| ➖ | datadog_agent_remap_datadog_logs | ingress throughput | +0.04 | [-0.04, +0.13] |
| ➖ | http_to_http_json | ingress throughput | +0.01 | [-0.06, +0.09] |
| ➖ | splunk_hec_indexer_ack_blackhole | ingress throughput | +0.00 | [-0.14, +0.14] |
| ➖ | syslog_log2metric_tag_cardinality_limit_blackhole | ingress throughput | +0.00 | [-0.11, +0.12] |
| ➖ | splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.00 | [-0.16, +0.16] |
| ➖ | otlp_grpc_to_blackhole | ingress throughput | -0.01 | [-0.10, +0.08] |
| ➖ | splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.02 | [-0.13, +0.10] |
| ➖ | enterprise_http_to_http | ingress throughput | -0.09 | [-0.17, -0.02] |
| ➖ | http_to_s3 | ingress throughput | -0.10 | [-0.38, +0.17] |
| ➖ | syslog_regex_logs2metric_ddmetrics | ingress throughput | -0.12 | [-0.21, -0.02] |
| ➖ | splunk_hec_route_s3 | ingress throughput | -0.16 | [-0.64, +0.32] |
| ➖ | datadog_agent_remap_blackhole_acks | ingress throughput | -0.28 | [-0.37, -0.18] |
| ➖ | fluent_elasticsearch | ingress throughput | -0.46 | [-0.92, -0.00] |
| ➖ | datadog_agent_remap_datadog_logs_acks | ingress throughput | -0.49 | [-0.56, -0.42] |
| ➖ | http_elasticsearch | ingress throughput | -0.78 | [-0.84, -0.72] |
| ➖ | datadog_agent_remap_blackhole | ingress throughput | -0.82 | [-0.91, -0.74] |
| ➖ | http_text_to_http_json | ingress throughput | -1.14 | [-1.26, -1.02] |
| ➖ | socket_to_socket_blackhole | ingress throughput | -1.62 | [-1.71, -1.53] |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
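The three criteria above combine into a simple decision rule. This is a sketch of the logic as described, not the detector's actual implementation; the function name and signature are illustrative:

```rust
// Regression decision rule: flag a change only if it is large enough,
// statistically distinguishable from zero, and the experiment is not
// marked erratic. `ci` is the (low, high) bounds of the Δ mean % CI.
fn is_regression(delta_mean_pct: f64, ci: (f64, f64), erratic: bool) -> bool {
    let big_enough = delta_mean_pct.abs() >= 5.0; // criterion 1
    let ci_excludes_zero = ci.0 > 0.0 || ci.1 < 0.0; // criterion 2
    big_enough && ci_excludes_zero && !erratic // criterion 3
}

fn main() {
    // The largest change in this run, syslog_log2metric_humio_metrics:
    // +2.77 [+2.65, +2.90]. The CI excludes zero, but |Δ| < 5.00%,
    // so it is not flagged.
    println!("{}", is_regression(2.77, (2.65, 2.90), false)); // false
}
```

Note that a CI excluding zero alone is not enough; several rows in the table (e.g. enterprise_http_to_http) have CIs that exclude zero but fall well under the 5.00% effect-size tolerance.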

Merged via the queue into master with commit 28a4cb4 Feb 5, 2024
@neuronull neuronull deleted the neuronull/component_validation_sink_component_spec branch February 5, 2024 18:29
AndrooTheChen pushed a commit to discord/vector that referenced this pull request Sep 23, 2024
* add fix and small refactor

* fix compilation errors

* 3 ticks

* dont compute expected metrics in validator

* cleanup

* cleanup

* clippy

* feedback tz: sent_eventssssss

* feedback tz: fix telemetry shutdown finishing logic

* 3 ticks

* small reorg to add sinks

* mini refactor of the component spec validators

* attempt to set expected values from the resource

* feedback tz- from not try_from

* back to 3 ticks

* fix incorrect expected values

* Even more reduction

* clippy

* add the discarded events total check

* workaround the new sync issues

* check events

* partial feedback

* thought i removed that

* use ref
