
Handling of cross-origin request network timings #3199

Open
t2t2 opened this issue Aug 25, 2022 · 2 comments · May be fixed by #5332
Labels: bug (Something isn't working), priority:p2 (Bugs and spec inconsistencies which cause telemetry to be incomplete or incorrect)

Comments

t2t2 (Contributor) commented Aug 25, 2022

What happened?

Steps to Reproduce

When a cross-origin request is made (e.g. example.com -> resources.vendor.com, but also example.com -> static.example.com, or even localhost:3000 (a webpack dev server) -> localhost:8080 (an API server)), the PerformanceResourceTiming entry generated for it omits most of the timing values by default:

> Note: When CORS is in effect, many of these values are returned as zero unless the server's access policy permits these values to be shared. This requires the server providing the resource to send the Timing-Allow-Origin HTTP response header with a value specifying the origin or origins which are allowed to get the restricted timestamp values.
>
> The properties which are returned as 0 by default when loading a resource from a domain other than the one of the web page itself: redirectStart, redirectEnd, domainLookupStart, domainLookupEnd, connectStart, connectEnd, secureConnectionStart, requestStart, and responseStart.

(MDN)
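The restricted-property behaviour quoted above can be checked programmatically. Here is a hypothetical sketch (not part of otel/sdk-trace-web; the helper name and entry shape are assumptions for illustration):

```typescript
// Hypothetical helper (not part of the SDK): detect whether a
// PerformanceResourceTiming entry was restricted because the server
// did not send a Timing-Allow-Origin header. When restricted, the
// browser reports every property listed below as 0.
const RESTRICTED_KEYS = [
  'redirectStart', 'redirectEnd',
  'domainLookupStart', 'domainLookupEnd',
  'connectStart', 'connectEnd', 'secureConnectionStart',
  'requestStart', 'responseStart',
] as const;

type RestrictedTimings = Record<(typeof RESTRICTED_KEYS)[number], number>;

function isTimingRestricted(entry: RestrictedTimings): boolean {
  // A same-origin (or Timing-Allow-Origin-permitted) entry virtually
  // always has a non-zero requestStart; a restricted entry zeroes all
  // of these keys at once. Note that redirectStart/redirectEnd are 0
  // even for unrestricted entries when no redirect occurred, which is
  // why the check requires every key to be 0.
  return RESTRICTED_KEYS.every(key => entry[key] === 0);
}
```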

Currently, addSpanNetworkEvents in otel/sdk-trace-web has no check for this, which means the 0 values get passed to span.addEvent, where timeOrigin (the time at which the page loaded) is added, producing timestamps that look normal but usually fall before the span start:

documentLoad
span start time:       1661419670581.9
fetchStart:            1661419670581.9
unloadEventStart:      1661419670378.3
unloadEventEnd:        1661419670378.3
domInteractive:        1661419671104.4

resourceFetch  (to another subdomain)
span start:            1661419671106.4

fetchStart:            1661419671106.4
domainLookupStart:     1661419670378.3
domainLookupEnd:       1661419670378.3
connectStart:          1661419670378.3
secureConnectionStart: 1661419670378.3
connectEnd:            1661419670378.3
requestStart:          1661419670378.3
responseStart:         1661419670378.3
responseEnd:           1661419671153.8
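Using the numbers from the dump above, the conversion problem reproduces in a few lines (a minimal sketch; the variable names are illustrative, not the SDK's internals):

```typescript
// Minimal reproduction of the conversion problem, using the numbers
// from the dump above. A CORS-restricted timing of 0, treated as a
// valid high-res timestamp, turns into "the moment the page loaded"
// once timeOrigin is added -- well before the span ever started.
const timeOrigin = 1661419670378.3;  // page load, epoch ms
const spanStart = 1661419671106.4;   // resourceFetch span start, epoch ms

const requestStart = 0;              // restricted by CORS, reported as 0
const requestStartEpoch = timeOrigin + requestStart;

// requestStartEpoch is 1661419670378.3, ~728 ms before spanStart
const looksBeforeSpanStart = requestStartEpoch < spanStart; // true
```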

This sort of data isn't easy to filter out when processing happens at the span level (there is no information about which values were 0 in the browser), so the simplest way of generating a request-phases chart (e.g. responseEnd - responseStart to calculate response duration) produces timings longer than makes sense:


Expected Behavior

Don't include values that are known to be unavailable in this case, so that analysis tools can more easily handle it differently. (This might be considered a breaking change that people would object to, so I decided to open this issue first to suggest it rather than going straight to a PR.)

If there's too much resistance to changing the current behaviour, this could also be something specced out during the RUM SIG and fixed when the instrumentation follows those specs (touching on #3174), but as a user of the current instrumentation in signalfx/splunk I would rather fix the current situation.

@t2t2 t2t2 added bug Something isn't working triage labels Aug 25, 2022
scheler (Contributor) commented Aug 26, 2022

This looks helpful; I support this change. Is there any process currently for introducing breaking changes? For example, this change could be gated behind an InstrumentationConfig option, and you could give notice to folks via the changelog / release notes of the npm package.

legendecas (Member) commented Aug 26, 2022

> Is there any process currently for introducing breaking changes?

I don't think there is any, as long as we have a good reason to make the breaking change. I believe if the current 0-valued events are meaningless in CORS requests, it should be OK for us not to generate these events -- they are not working as expected anyway.

@dyladan dyladan added priority:p2 Bugs and spec inconsistencies which cause telemetry to be incomplete or incorrect and removed triage labels Aug 31, 2022
chancancode added a commit to tildeio/opentelemetry-js that referenced this issue Jan 11, 2025
Background:

1. For historical reasons, the perf/resource timing spec uses 0 as a
   special value to denote when some timing information is either
   not applicable (e.g. no redirect occurred) or not available (when
   producing an opaque resource timing object for CORS resources).

2. However, in some limited cases, 0 can also be a legitimate value
   for these timing events. Note that this is rare in real life –
   these fields are high-res performance timestamps relative to the
   performance time-origin, which is typically the navigation event
   for the initial page load.

   For a _resource_ timing to have a 0 timestamp, it would have to
   be initiated simultaneously with page load; it is unclear whether
   this can actually happen IRL. Reportedly (open-telemetry#1769), at one point this
   was observed in some browsers during automated tests, where things
   happen very fast and the browser artificially suppresses the
   timing resolution. It was unclear whether the report was about the
   navigation timing entry or resource timing entries.

   It is also unclear if these utilities are intended for anything
   other than the internal fetch/XHR instrumentation, but they are
   public API, so if someone uses these functions on the initial
   page navigation event, then it is possible for the function to
   see legitimate 0-value inputs.

3. When creating span events, we do not use the timeOrigin-relative
   perf timestamps as-is. Rather, we convert them back to UNIX
   epoch timestamps. When this conversion is applied to those 0
   timestamps, it creates nonsensical/misleading events that are
   quite difficult to distinguish for downstream consumers.

It would be nice if the W3C specs would have left the N/A values as
`undefined`, but that's not the world we live in and so we have to
work with what we've got.

History:

1. Initially, the code ignored 0-value timestamps.
2. open-telemetry#1769 identified cases of valid 0-value timestamps and removed
   the check.
3. This caused the other category of bugs, where we created the
   nonsensical events (open-telemetry#2457, open-telemetry#3848, open-telemetry#4478).
4. open-telemetry#3897 added a special case for `secureConnectionStart` by way
   of tapping into auxiliary metadata.
5. That approach cannot be generalized to the other cases, so
   open-telemetry#4486 added some rather convoluted checks.
6. As part of refactoring the tests to use service workers, a new
   bug open-telemetry#5314 was identified.

Presumably, the patch in open-telemetry#4486 was written that way (as opposed to
just checking for 0) to avoid breaking open-telemetry#1769, but I suspect it ends
up breaking some of those use cases anyway.

Options:

1. Keep the patch from open-telemetry#4486 largely as-is but change the reference
   time from `fetchStart` to `startTime`.
2. Revert to the very original intent of the code and drop everything
   with 0-values.
3. This commit attempts a compromise position: when `startTime === 0`
   (which probably means we were called with the initial navigation
   event), 0-values are plausibly valid, so we include them by default;
   but in any other case (`startTime > 0`), legitimate 0-values
   should not be possible, so we drop them by default.
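The compromise in option (3) could be sketched roughly as follows (assumed shape only; the real addSpanNetworkEvents logic is more involved, and the function name here is illustrative, not the SDK's actual API):

```typescript
// Sketch of the compromise in option (3): keep 0-valued timings only
// when startTime itself is 0, i.e. when the entry is plausibly the
// initial navigation entry. Hypothetical helper, not the SDK's API.
function shouldRecordTiming(startTime: number, value: number): boolean {
  if (value !== 0) return true; // non-zero values are always recorded
  return startTime === 0;       // 0s survive only for the navigation entry
}
```

With a guard like this, a restricted requestStart of 0 on a resource entry (startTime > 0) would be dropped instead of being shifted by timeOrigin into a misleading epoch timestamp.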

Part of the issue here is that it's unclear how these utilities are
being used since they are public APIs. In core, these functions are
only called by the fetch/xhr instrumentation with resource timing
entries, where legitimate 0-values seem extremely unlikely.

In my opinion, the rarity of legitimate 0-value timings in the
real world (recall that it doesn't just mean something happened very
fast, but that something happened very fast _at the same instant that
the page loaded_) should yield to the conflicting interest (avoiding
nonsensical events that are difficult to process downstream) here,
especially when it appears that the only observed cases are from
automated testing.

Personally I would be in favor of the stronger position (2), but
attempted to strike a balance here to keep things useful in the
other cases.

In open-telemetry#2457, @johnbley argued:

> I think that for this piece of tech, our telemetry should report
> whatever the browser says with minimal processing, and let
> downstream/backend/easier-to-modify components deal with
> normalization/truncation/processing if desired (e.g., throwing
> away absurd times like "that page took 6 years or -5 hours to
> load", or declaring that "no redirects happened so the redirect
> timing is not 0 but non-existent"). Does that not work in your
> world for some reason?

I could see this perspective if "minimal processing" meant that
we send the 0-values to the backends as 0s. It's still unfortunate,
but at least it would be easy to write a query/filter to process
them away.

However, given that we actively normalize the 0-values into
absolute epoch timestamps via `performance.timeOrigin`, I think this
becomes actively hostile to backend/downstream processing, as it
turns a relatively clear signal (value=0) into something that would
require stateful machinery to reverse engineer (inferring the
timeOrigin from a trace), or imperfect heuristics (ignoring things
that seem to take way too long).

It also takes up bytes/bandwidth to transmit in the vast majority
of cases, where it would be actively unhelpful.

Instead, I'd argue it does far less harm to drop the 0-values.
When a 0-value is expected but not sent, it may skew the distribution
of some aggregate metrics slightly (e.g. making the "average time for
SSL handshake" appear larger), but probably nothing would break. If
we are expecting backends to do work to normalize the data anyway,
then IMO it would also be far easier for them to notice the missing
items and insert them back than the other way around.

Fixes open-telemetry#3199
Fixes open-telemetry#5314

See also open-telemetry#1769, open-telemetry#2457, open-telemetry#3848, open-telemetry#3879, open-telemetry#4478, open-telemetry#4486
chancancode added a commit to tildeio/opentelemetry-js that referenced this issue Jan 11, 2025
(Same commit message as above.)
chancancode added a commit to tildeio/opentelemetry-js that referenced this issue Jan 14, 2025
(Same commit message as above.)