enhancement(dev): Add dedicated dev tool#14990
ofek wants to merge 19 commits into vectordotdev:master from
Conversation
```rust
command.arg("--features");
if !self.feature.is_empty() {
    command.args([self.feature.join(",")]);
} else {
    if app.platform.windows() {
        command.arg("default-msvc");
    } else {
        command.arg("default");
    }
};
```
You may also find script/features interesting, to pull the required features out of a config file based on the components that are in use.
Ah nice, I will incorporate that soon
neuronull left a comment:
Nice! I checked out the branch and played around with it a bit.
Co-authored-by: neuronull <kyle.criddle@datadoghq.com>
| vdev exec ls | ||
| ``` | ||
|
|
||
| ### Starship |
This is a really nice touch!
Co-authored-by: Stephen Wakely <stephen@lisp.space>
```rust
if let Some(extra_args) = &self.args {
    args.extend(extra_args.to_owned());

    if !(self.container || extra_args.contains(&"--features".to_string())) {
```

`"blah".to_string()` is basically just `"blah"`, so this should work:

```diff
-if !(self.container || extra_args.contains(&"--features".to_string())) {
+if !(self.container || extra_args.contains("--features")) {
```
Co-authored-by: Stephen Wakely <stephen@lisp.space>
Regression Test Results

Baseline: 408b59e

A regression test is an integrated performance test for vector in a repeatable rig. The table below, if present, lists those experiments that have experienced a statistically significant change in their performance between the baseline and comparison SHAs. Fine details of change detection per experiment.
Soak Test Results

Baseline: 408b59e

A soak test is an integrated performance test for vector in a repeatable rig, with varying configuration for vector. What follows is a statistical summary of a brief vector run for each configuration across the SHAs given above. The goal of these tests is to determine, quickly, if vector performance is changed and to what degree by a pull request. Where appropriate, units are scaled per-core.

The table below, if present, lists those experiments that have experienced a statistically significant change in their throughput performance between the baseline and comparison SHAs, with 90.0% confidence, OR have been detected as newly erratic. Negative values mean that baseline is faster, positive that comparison is. Results that do not exhibit more than a ±8.87% change in mean throughput are discarded. An experiment is erratic if its coefficient of variation is greater than 0.3. The abbreviated table will be omitted if no interesting changes are observed.

No interesting changes in throughput with confidence ≥ 90.00% and absolute Δ mean ≥ ±8.87%. Fine details of change detection per experiment.
Any update on this?
🤔 I thought we had merged it? Or was that just the RFC? My assumption was they were a "package" and we'd merge both - @jszwedko is this good to merge?
Bruce had some thoughts about making the definitions of integration tests a bit less verbose that he was going to explore in a separate branch for comparison.
bruceg left a comment:
With the exception of integration testing as a set of external executables, I approve of this approach. I left a number of comments about smaller issues below, and will be tackling some of the code refactoring work myself in another branch.
```rust
} else {
    match env::current_dir() {
        Ok(p) => p.display().to_string(),
        Err(_) => ".".to_string(),
```
Why is a failure to determine the current working directory papered over here?
`.` works fine as a fallback
If `.` works as a fallback, do we need the full path at all?
vdev/src/platform.rs
```rust
pub const fn windows() -> bool {
    cfg!(target_os = "windows")
}

#[allow(dead_code)]
pub const fn macos() -> bool {
    cfg!(target_os = "macos")
}

#[allow(dead_code)]
pub const fn unix() -> bool {
    cfg!(not(any(target_os = "windows", target_os = "macos")))
}
```
I don't think these add much value over using the appropriate `cfg` directly. That is, `platform::windows()` is not less cognitive overhead than `cfg!(windows)` (the `target_os` is unnecessary here, and we have standardized on not using it, #15000). This is particularly true in places where we can outright skip emitting the code with `#[cfg(windows)]`.
> `cfg!(windows)` (the `target_os` is unnecessary here, and we have standardized on not using it #15000)
Fixed!
> I don't think these add much value over using the appropriate `cfg` directly.
From experience I think this is much easier and less error-prone than `cfg` peppered everywhere, especially when there is a need to be specific, like Debian or Fedora. Let me know what you would like me to do.
```rust
progress_bar.set_style(
    ProgressStyle::with_template("{spinner} {msg:.magenta.bold}")?
        // https://github.com/sindresorhus/cli-spinners/blob/master/spinners.json
        .tick_strings(&["∙∙∙", "●∙∙", "∙●∙", "∙∙●", "∙∙∙"]),
```
I see ∙∙∙ is repeated twice here. Is that intentional?
Yup that's correct as you can see in the link
I'm not seeing it. All the strings I'm seeing at that link don't have any repeats.
```rust
let mut command = Command::new("cargo");
command.current_dir(&test_dir);
command.env(NETWORK_ENV_VAR, runner.network_name());
command.args(["run", "--quiet", "--", "start"]);
```
As outlined separately, I'm not a big fan of the external process model to run integrations. I'll see what I can do to work up a more integrated alternative.
Edit: Actually, that's not quite right. The external process might be fine here, and may simplify separation of responsibilities in the end. It's the model of each integration test requiring its own crate and executable that I'm opposed to.
Regression Test Results

Run ID: 97c8d4c9-e109-4572-9ff3-adadac228220

A regression test is an integrated performance test for vector in a repeatable rig. The table below, if present, lists those experiments that have experienced a statistically significant change in their performance between the baseline and comparison SHAs. Fine details of change detection per experiment.
Completed in #15833
PoC start of a tooling revamp that will improve E2E environment management of integrations, reduce test times with smart selection, and ease onboarding, especially for Windows users; similar to https://datadoghq.dev/integrations-core/ddev/cli/
Should eventually replace everything in https://github.com/vectordotdev/vector/tree/master/scripts