enhancement(datadog service): Add datadog global config options #18929
StephenWakely merged 31 commits into master from
Conversation
Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>
Datadog Report: Branch report ✅
Also note that I created #18940 to clean up some of the existing config options.
buraizu
left a comment
Suggesting a couple of edits to the new content
bruceg
left a comment
A lot of quibbling over naming and suggestions for shared helpers in my comments, but the approach seems reasonable. In general, I would like to see two things:
- Keep the implementation details of the context object (i.e. the use of an anymap) within the context object type wrapper.
- Rename "extra" to something more specific. That is, the term "context" already implies it informs something else, so it is already "extra". Maybe it is app context, given where it is first found?
Feel free to argue why it should stay the same, though.
src/extra_context.rs
Outdated
```rust
#[derive(Clone)]
pub struct ExtraContext {
    context: Arc<Map<dyn Any + Send + Sync>>,
}
```
This looks like more of a newtype pattern than a structure of values, so I would suggest phrasing it as such:
```diff
-#[derive(Clone)]
-pub struct ExtraContext {
-    context: Arc<Map<dyn Any + Send + Sync>>,
-}
+pub struct ExtraContext(Arc<Map<dyn Any + Send + Sync>>);
```
I'm also not excited about the type (shared any map) being named for its usage point (extra app context), but this is all more bikeshedding than anything.
Having said that, I think the choice of an anymap here is a great way of allowing this to be extensible without adding generics to any of the types.
> I'm also not excited about the type (shared any map) being named for its usage point (extra app context), but this is all more bikeshedding than anything.
I'm not quite sure what you mean. Can you expound?
The type is named ExtraContext. The whole name of the type is an indication of where it is used rather than what it contains. That is, this shared any-map is used as a context field that is extraneous to other application data. The type itself is not limited to being "extra context" and that term describes little about what it actually is and what range of operations can be done on it.
Anyways, like I said, more bikeshedding than anything on this point.
src/sinks/datadog/logs/tests.rs
Outdated
```rust
let mut context = anymap::Map::new();
context.insert(datadog::Options {
    api_key: Some("global-key".to_string().into()),
    ..Default::default()
});
let cx = SinkContext {
    extra_context: ExtraContext::new(context),
```
This pattern seems awkward, and seems a good motivator to have the type wrapper do something like:
```diff
-let mut context = anymap::Map::new();
-context.insert(datadog::Options {
-    api_key: Some("global-key".to_string().into()),
-    ..Default::default()
-});
-let cx = SinkContext {
-    extra_context: ExtraContext::new(context),
+let extra_context = ExtraContext::single_value(datadog::Options {
+    api_key: Some("global-key".to_string().into()),
+    ..Default::default()
+});
+let cx = SinkContext {
+    extra_context,
```
That is, it would be good to avoid seeing the anymap type outside of the context object wrapper.
@StephenWakely, I think you still need to update the above now that you've implemented ExtraContext::single_value 🙂
```diff
 #[derive(Clone, Debug, Default)]
 #[serde(deny_unknown_fields)]
-pub struct DatadogCommonConfig {
+pub struct LocalDatadogCommonConfig {
```
More bikeshedding: I find it ironic that the structure that is shared across multiple components is now called "local". Since the other DatadogCommonConfig is not something that is "configured" the way this one is, maybe it could be DatadogCommonSettings or even just DatadogCommon.
I do still prefer LocalDatadogCommonConfig because we apply the global options to create the DatadogCommonConfig, implying the combination of Local and Global to create the definitive config. If not Local, then perhaps something else closely related?
dsmith3197
left a comment
Thanks for adding the extra tests! This looks good to me. Left some trivial comments / suggested changes.
Regression Detector Results

Run ID: 445b2d22-ae6c-4804-aea7-127965eca4d0

Explanation: A regression test is an integrated performance test. Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control; we represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval. A change in performance is flagged as a "regression" -- a change worth investigating further -- if both criteria are true.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence, or that have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table is omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.
Regression Detector Results (Run ID: a148dbda-5dc6-445e-b070-d1d96872fc32): No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.
Regression Detector Results (Run ID: 25d3e9aa-b85b-4c79-818d-0baabd5f7878): Changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00% were detected.
Regression Detector Results

Run ID: 9d9951ce-ba7c-4619-a2ab-7229c4bdc93e

Explanation: A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI". For each experiment, a change in performance is flagged as a "regression" -- a change worth investigating further -- if all of the criteria are true.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%. Some experiments that were not declared erratic were detected as being so (coefficient of variation cutoff 0.10).
Regression Detector Results (Run ID: 632b1d0d-38e2-49e5-a747-527e64df2bec): Changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00% were detected; some experiments that were not declared erratic were detected as being so (coefficient of variation cutoff 0.10).
Regression Detector Results (Run ID: 7328e4d1-c4c4-451a-91f6-582dc993bd8d): Changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00% were detected; some experiments that were not declared erratic were detected as being so (coefficient of variation cutoff 0.10).
Regression Detector Results (Run ID: a7fa182a-ea6b-4f21-a36d-68308dc9add2): No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%; some experiments that were not declared erratic were detected as being so (coefficient of variation cutoff 0.10).
…ordotdev#18929)

* Add datadog section
* Send global datadog options to sinks
* Fix tests
* Added a test
* Update docs
* Remove source feature
* Test local setting can override the global one
* Spelling
* Update docs
* Feedback from Doug
* Allow datadog options to be specified by passing in the RootOpts
* Pass datadog options through a new extra_context parameter
* Fix extra context
* Tidy up a little
* Make Datadog options pub
* Updated licenses
* Add anymap to spellings
* Add to upgrade guide
* Feedback from Doug
* Spelling
* Feedback from Bryce
* Component docs
* Feedback from Bruce
* Use single_value in tests
* Rename dd_common to local_dd_common
* Pass default for extra context when running as a Windows service

---------

Signed-off-by: Stephen Wakely <fungus.humungus@gmail.com>
OPW-43
This adds a global Datadog section to the configuration, which then applies as defaults to the Datadog logs, metrics, and traces sinks.
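A rough sketch of what such a configuration might look like; the exact option keys (e.g. whether a sink's local key is spelled `default_api_key`) should be checked against the docs changes in this PR, as the names below are illustrative:

```toml
# Hypothetical global Datadog section: values here act as defaults
# for the Datadog logs, metrics, and traces sinks.
[datadog]
api_key = "${DATADOG_API_KEY}"

[sinks.dd_logs]
type = "datadog_logs"
inputs = ["my_source"]
# No API key set locally: the global default above applies.

[sinks.dd_metrics]
type = "datadog_metrics"
inputs = ["my_metrics"]
# A locally set key overrides the global default (key name illustrative).
default_api_key = "sink-specific-key"
```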