
feat(amqp): added integration test for TLS #17219

Closed
dkhokhlov wants to merge 1 commit into vectordotdev:master from dkhokhlov:master

Conversation

@dkhokhlov

@dkhokhlov dkhokhlov commented Apr 26, 2023

feat(amqp): added integration test for TLS

@dkhokhlov dkhokhlov requested a review from a team April 26, 2023 01:34
@bits-bot

bits-bot commented Apr 26, 2023

CLA assistant check
All committers have signed the CLA.

@netlify

netlify bot commented Apr 26, 2023

Deploy Preview for vector-project canceled.

| Name | Link |
| --- | --- |
| 🔨 Latest commit | ad57cf2 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/vector-project/deploys/6448d5691d2b1c0008602857 |

@netlify

netlify bot commented Apr 26, 2023

Deploy Preview for vrl-playground canceled.

| Name | Link |
| --- | --- |
| 🔨 Latest commit | ad57cf2 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/vrl-playground/deploys/6448d5694e231800082749b2 |

@github-actions github-actions bot added the `domain: ci` (Anything related to Vector's CI environment) and `domain: sources` (Anything related to Vector's sources) labels Apr 26, 2023
@dkhokhlov dkhokhlov force-pushed the master branch 5 times, most recently from 4f8f30d to 762401b Compare April 26, 2023 03:40
@dkhokhlov
Author

dkhokhlov commented Apr 26, 2023

@jonathanpv
Do you know how to specify the test runner's dependency on an integration test service using the new vdev approach?
In previous Vector versions the runner service was explicitly listed in the docker compose YAML, so it was possible to declare that dependency explicitly on one of the integration test services with a healthcheck.
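For reference, the pre-vdev setup being described could be sketched roughly like this (service and image names are illustrative, not the actual compose file from the repo; `rabbitmq-diagnostics -q ping` is the readiness probe commonly used with the official rabbitmq image):

```yaml
services:
  rabbitmq:
    image: rabbitmq:3
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 5s
      timeout: 10s
      retries: 10
  runner:
    image: vector-test-runner   # illustrative name
    depends_on:
      rabbitmq:
        condition: service_healthy
```

With `condition: service_healthy`, compose only starts the runner once rabbitmq's healthcheck passes, which is exactly the dependency that is hard to express once the runner is no longer in the compose file.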

context

- added integration test for TLS
- new test certs for rabbitmq

Ref: LOG-16435


Signed-off-by: Dmitri Khokhlov <dkhokhlov@gmail.com>
@dkhokhlov dkhokhlov changed the title feat(sources): added integration test for TLS feat(amqp): added integration test for TLS Apr 26, 2023
@jonathanpv
Contributor

jonathanpv commented Apr 26, 2023

@dkhokhlov
You're right, we used to have the runner block inside the compose files, but now it is handled programmatically by vdev/src/testing/runner.rs

Not 100% certain how runner.rs handles the dependency, but it seems another integration test manages to use the healthcheck and depends_on blocks: 5d88655#diff-b33309bec47f62c644c45d1469144c7e813d59cc7f3deebf74812bde8566cb66L18-L21

This one might be similar to what you're trying to accomplish:
5d88655#diff-9a1e1aff25c23761c3503fde754610303dd80a4c883689bf5fb64885eec2aa9eL13-R14

Here's the relevant block for what runner.rs does to prepare and launch the containers:
https://github.com/vectordotdev/vector/pull/16981/files#diff-c8fdcce747feb38b68bce76cb7c103db574e868a27a854603aa10f31ee7a0099L142-R146

The code that handles the dependency, though, might be here:
in vdev/src/testing/integration.rs, impl Compose > run() calls docker compose on the compose file that is generated from the injected blocks.
https://github.com/vectordotdev/vector/blob/master/vdev/src/testing/integration.rs#L199

Later on we plan to inject the runner block in integration.rs and potentially remove some code from runner.rs, but that has yet to be implemented.

@github-actions

Regression Detector Results

Run ID: bf2e8a78-69f7-43e9-9fab-e323b4f72a06
Baseline: 410aa3c
Comparison: ad57cf2
Total vector CPUs: 7

Explanation

A regression test is an integrated performance test for vector in a repeatable rig, with varying configuration for vector. What follows is a statistical summary of a brief vector run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether, and to what degree, a pull request changes vector performance.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| syslog_regex_logs2metric_ddmetrics | ingress throughput | +3.87 | [+3.62, +4.12] | 100.00% |
| file_to_blackhole | egress throughput | +3.82 | [+0.31, +7.32] | 83.73% |
| http_text_to_http_json | ingress throughput | +3.16 | [+3.09, +3.22] | 100.00% |
| syslog_log2metric_splunk_hec_metrics | ingress throughput | +1.33 | [+1.24, +1.41] | 100.00% |
| syslog_loki | ingress throughput | +1.18 | [+1.09, +1.27] | 100.00% |
| syslog_humio_logs | ingress throughput | +1.17 | [+1.09, +1.25] | 100.00% |
| otlp_http_to_blackhole | ingress throughput | +0.89 | [+0.74, +1.04] | 100.00% |
| syslog_splunk_hec_logs | ingress throughput | +0.87 | [+0.80, +0.94] | 100.00% |
| http_to_http_json | ingress throughput | +0.78 | [+0.71, +0.85] | 100.00% |
| otlp_grpc_to_blackhole | ingress throughput | +0.72 | [+0.61, +0.83] | 100.00% |
| socket_to_socket_blackhole | ingress throughput | +0.56 | [+0.50, +0.62] | 100.00% |
| enterprise_http_to_http | ingress throughput | +0.02 | [-0.01, +0.05] | 53.18% |
| http_to_http_noack | ingress throughput | +0.02 | [-0.04, +0.08] | 25.45% |
| fluent_elasticsearch | ingress throughput | +0.00 | [-0.00, +0.00] | 68.56% |
| splunk_hec_indexer_ack_blackhole | ingress throughput | -0.00 | [-0.04, +0.04] | 1.04% |
| splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.01 | [-0.07, +0.06] | 9.45% |
| splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.01 | [-0.05, +0.04] | 14.19% |
| splunk_hec_route_s3 | ingress throughput | -0.21 | [-0.35, -0.07] | 95.16% |
| datadog_agent_remap_datadog_logs_acks | ingress throughput | -0.51 | [-0.60, -0.42] | 100.00% |
| http_to_http_acks | ingress throughput | -0.91 | [-2.12, +0.30] | 66.67% |
| datadog_agent_remap_datadog_logs | ingress throughput | -1.26 | [-1.36, -1.16] | 100.00% |
| syslog_log2metric_humio_metrics | ingress throughput | -1.62 | [-1.71, -1.53] | 100.00% |
| datadog_agent_remap_blackhole_acks | ingress throughput | -2.10 | [-2.19, -2.01] | 100.00% |
| datadog_agent_remap_blackhole | ingress throughput | -2.27 | [-2.38, -2.16] | 100.00% |

```yaml
volumes:
- ${PWD}:/code

wait_for_rabbitmq:
```
Contributor


Thinking about it, I'm not actually sure why this is necessary now. Does it take longer for a TLS-enabled rabbitmq to start up?

If a pause is necessary, it might be easier to put the pause and run the healthcheck at the start of the test, in the rust code.

Author


It was needed even for the test without TLS, in the old integration test framework. Without the wait, the test will fail because rabbitmq may not be ready by the time the test runs, especially when docker buildkit is enabled. It seems it is still needed in the new test framework. The rabbitmq image itself does not provide a healthcheck, so I had to add a second service for that. Going to look into the runner.rs approach you mentioned.

@jszwedko jszwedko added the meta: awaiting author Pull requests that are awaiting their author. label May 18, 2023
@jszwedko jszwedko requested a review from StephenWakely October 6, 2023 21:24
@jszwedko jszwedko removed the meta: awaiting author Pull requests that are awaiting their author. label Oct 6, 2023
@StephenWakely
Contributor

Closing in favour of #18813.


Labels

`domain: ci` (Anything related to Vector's CI environment), `domain: sources` (Anything related to Vector's sources)


5 participants