
Coverage workflows fail on v22.x/v22.x-staging #55510

Open
richardlau opened this issue Oct 23, 2024 · 11 comments
Labels
coverage Issues and PRs related to native coverage support. meta Issues and PRs related to the general management of the project. release-agenda Issues and PRs to discuss during the meetings of the Release team. v22.x v22.x Issues that can be reproduced on v22.x or PRs targeting the v22.x-staging branch.

Comments

@richardlau
Member

richardlau commented Oct 23, 2024

Starting with Node.js 22.8.0 (78ee90e) the "without intl" coverage workflow started failing because coverage "does not meet the global threshold".

From Node.js 22.9.0 (4631be0) both coverage workflows (with and without intl) fail due to the same reason (not meeting the global threshold). The situation persists through Node.js 22.10.0 and for the proposal for Node.js 22.11.0. (I'm going to ignore this for Node.js 22.11.0 as the release is intentionally not including changes beyond the metadata updates to mark the release as LTS.)

AFAIK the coverage workflows are passing on Node.js 23 and main and were passing on Node.js 22.7.0 (65eff1e), so the question is what has/hasn't landed on v22.x-staging to cause the discrepancy? If there isn't an easy way to get the workflow passing again, can we lower the threshold for Node.js 22 only, or even disable the coverage workflows for 22?

cc @nodejs/releasers

@RedYetiDev RedYetiDev added meta Issues and PRs related to the general management of the project. coverage Issues and PRs related to native coverage support. v22.x v22.x Issues that can be reproduced on v22.x or PRs targeting the v22.x-staging branch. labels Oct 24, 2024
@RedYetiDev
Member

RedYetiDev commented Oct 24, 2024

It's good to note that we are 0.07% away from the threshold, so maybe covering more methods would be a good solution.

@richardlau
Member Author

> It's good to note that we are 0.07% away from the threshold, so maybe covering more methods would be a good solution.

To be clear, I would not want us to add tests to v22.x-staging to increase the coverage that do not already exist on main (or v23.x), but if we can identify anything missing from v22.x-staging we can start a discussion (e.g. if it's related to a semver major PR not being on v22.x-staging maybe a partial backport is possible).

@richardlau
Member Author

Looking in more detail at the failing workflows, several tests are failing on error code comparisons:
e.g. https://github.com/nodejs/node/actions/runs/11484562340/job/31962627696?pr=55504#step:8:12150

=== release test-console-group ===
Path: parallel/test-console-group
--- stderr ---
More of level 3
      More of level 3
node:assert:377
      throw err;
      ^

AssertionError [ERR_ASSERTION]: Expected values to be strictly deep-equal:
+ actual - expected

  Comparison {
+   code: [Function: internalBinding],
-   code: 'ERR_INVALID_ARG_TYPE',
    name: 'TypeError'
  }
    at /home/runner/work/node/node/test/parallel/test-console-group.js:197:12
    at Array.forEach (<anonymous>)
    at Object.<anonymous> (/home/runner/work/node/node/test/parallel/test-console-group.js:196:38)
    at Module._compile (node:internal/modules/cjs/loader:1546:14)
    at Object..js (node:internal/modules/cjs/loader:1689:10)
    at Module.load (node:internal/modules/cjs/loader:1318:32)
    at Function._load (node:internal/modules/cjs/loader:1128:12)
    at TracingChannel.traceSync (node:diagnostics_channel:315:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:170:5) {
  generatedMessage: true,
  code: 'ERR_ASSERTION',
  actual: TypeError [function internalBinding(module) {
      let mod = bindingObj[module];
      if (typeof mod !== 'object') {
        mod = bindingObj[module] = getInternalBinding(module);
        ArrayPrototypePush(moduleLoadList, `Internal Binding ${module}`);
      }
      return mod;
    }]: The "groupIndentation" argument must be of type number. Received null
      at new Console (node:internal/console/constructor:128:5)
      at assert.throws.code (/home/runner/work/node/node/test/parallel/test-console-group.js:199:9)
      at getActual (node:assert:498:5)
      at Function.throws (node:assert:644:24)
      at /home/runner/work/node/node/test/parallel/test-console-group.js:197:12
      at Array.forEach (<anonymous>)
      at Object.<anonymous> (/home/runner/work/node/node/test/parallel/test-console-group.js:196:38)
      at Module._compile (node:internal/modules/cjs/loader:1546:14)
      at Object..js (node:internal/modules/cjs/loader:1689:10)
      at Module.load (node:internal/modules/cjs/loader:1318:32) {
    code: [Function: internalBinding]
  },
  expected: { code: 'ERR_INVALID_ARG_TYPE', name: 'TypeError' },
  operator: 'throws'
}

Node.js v22.11.0
--- stdout ---
This is the outer level
  Level 2
    Level 3
  Back to level 2
Back to the outer level
Still at the outer level
No indentation
None here either
  Now the first console is indenting
But the second one does not
This is a label
  And this is the data for that label
Label
  Level 2
    Level 3
not indented
  indented
  also indented
  {
    also: 'a',
    multiline: 'object',
    should: 'be',
    indented: 'properly',
    kthx: 'bai'
  }
Set the groupIndentation parameter to 3
This is the outer level
   Level 2
      Level 3
   Back to level 2
Back to the outer level
Still at the outer level
Command: out/Release/node --test-reporter=spec /home/runner/work/node/node/test/parallel/test-console-group.js

I can reproduce this when building with --coverage and running:

$ NO_COLOR=1 out/Release/node --test-reporter=spec test/parallel/test-console-group.js
This is the outer level
  Level 2
    Level 3
    More of level 3
  Back to level 2
Back to the outer level
Still at the outer level
No indentation
None here either
  Now the first console is indenting
But the second one does not
This is a label
  And this is the data for that label
Label
  Level 2
    Level 3
not indented
  indented
  also indented
  {
    also: 'a',
    multiline: 'object',
    should: 'be',
    indented: 'properly',
    kthx: 'bai'
  }
Set the groupIndentation parameter to 3
This is the outer level
   Level 2
      Level 3
      More of level 3
   Back to level 2
Back to the outer level
Still at the outer level
node:assert:377
      throw err;
      ^

AssertionError [ERR_ASSERTION]: Expected values to be strictly deep-equal:
+ actual - expected

  Comparison {
+   code: [Function: internalBinding],
-   code: 'ERR_INVALID_ARG_TYPE',
    name: 'TypeError'
  }
    at /home/rlau/sandbox/github/trees/v22.x/test/parallel/test-console-group.js:197:12
    at Array.forEach (<anonymous>)
    at Object.<anonymous> (/home/rlau/sandbox/github/trees/v22.x/test/parallel/test-console-group.js:196:38)
    at Module._compile (node:internal/modules/cjs/loader:1546:14)
    at Object..js (node:internal/modules/cjs/loader:1689:10)
    at Module.load (node:internal/modules/cjs/loader:1318:32)
    at Function._load (node:internal/modules/cjs/loader:1128:12)
    at TracingChannel.traceSync (node:diagnostics_channel:315:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:170:5) {
  generatedMessage: true,
  code: 'ERR_ASSERTION',
  actual: TypeError [function internalBinding(module) {
      let mod = bindingObj[module];
      if (typeof mod !== 'object') {
        mod = bindingObj[module] = getInternalBinding(module);
        ArrayPrototypePush(moduleLoadList, `Internal Binding ${module}`);
      }
      return mod;
    }]: The "groupIndentation" argument must be of type number. Received null
      at new Console (node:internal/console/constructor:128:5)
      at assert.throws.code (/home/rlau/sandbox/github/trees/v22.x/test/parallel/test-console-group.js:199:9)
      at getActual (node:assert:498:5)
      at Function.throws (node:assert:644:24)
      at /home/rlau/sandbox/github/trees/v22.x/test/parallel/test-console-group.js:197:12
      at Array.forEach (<anonymous>)
      at Object.<anonymous> (/home/rlau/sandbox/github/trees/v22.x/test/parallel/test-console-group.js:196:38)
      at Module._compile (node:internal/modules/cjs/loader:1546:14)
      at Object..js (node:internal/modules/cjs/loader:1689:10)
      at Module.load (node:internal/modules/cjs/loader:1318:32) {
    code: [Function: internalBinding]
  },
  expected: { code: 'ERR_INVALID_ARG_TYPE', name: 'TypeError' },
  operator: 'throws'
}

Node.js v22.11.0
$

Confirmed this test passes (configured with --coverage) with v22.7.0. Bisecting points to 948bba3.

$ git bisect log
git bisect start
# status: waiting for both good and bad commits
# good: [65eff1eb19a6d8e17435bbc4147ac4535e81abb4] 2024-08-22, Version 22.7.0 (Current)
git bisect good 65eff1eb19a6d8e17435bbc4147ac4535e81abb4
# status: waiting for bad commit, 1 good commit known
# bad: [4631be0311fbde7b77723757e15d025727399e63] 2024-09-17, Version 22.9.0 (Current)
git bisect bad 4631be0311fbde7b77723757e15d025727399e63
# good: [85b5ed5d412c3ba941581f5dd3aebbdb90f3f36a] buffer: re-enable Fast API for Buffer.write
git bisect good 85b5ed5d412c3ba941581f5dd3aebbdb90f3f36a
# bad: [56aca2a1ca71746575bf6910523f48db9d36f269] doc: run license-builder
git bisect bad 56aca2a1ca71746575bf6910523f48db9d36f269
# good: [b8ae36b6c3bcbf560310ce9d0405ebd8d6768a6e] doc: fix typo in recognizing-contributors
git bisect good b8ae36b6c3bcbf560310ce9d0405ebd8d6768a6e
# good: [56238debff8b63cda4618b351b91d4983fecbc28] test: set `test-runner-run-watch` as flaky
git bisect good 56238debff8b63cda4618b351b91d4983fecbc28
# good: [608a61113236d715f1f1fc345154017a50d5de40] tools: add readability/fn_size to filter
git bisect good 608a61113236d715f1f1fc345154017a50d5de40
# bad: [61047dd13081b95579900fd0607e7d053c73bbfb] deps: upgrade npm to 10.8.3
git bisect bad 61047dd13081b95579900fd0607e7d053c73bbfb
# bad: [948bba396cb4f73f2962d04e53465247b3b1f356] build: do not build with code cache for core coverage collection
git bisect bad 948bba396cb4f73f2962d04e53465247b3b1f356
# good: [97cbcfbb437e075106f63b8156c9999151c052c1] src: fix unhandled error in structuredClone
git bisect good 97cbcfbb437e075106f63b8156c9999151c052c1
# first bad commit: [948bba396cb4f73f2962d04e53465247b3b1f356] build: do not build with code cache for core coverage collection
$

I'll open a revert PR to check if that restores the coverage, and if it does we can discuss what to do from there.

@richardlau
Member Author

> I'll open a revert PR to check if that restores the coverage, and if it does we can discuss what to do from there.

#55522 showed the coverage still failed to meet the threshold, so presumably it was an earlier change than 948bba3.

@joyeecheung I don't know if we still need to investigate the test failures due to error codes not matching when coverage is enabled after 948bba3 landed on v22.x-staging.

@joyeecheung
Member

Does enabling the code cache again fix the error code mismatches? It does seem rather weird if it has to rely on the code cache to pass, though...

@richardlau
Member Author

> Does enabling the code cache again fix the error code mismatches? It does seem rather weird if it has to rely on the code cache to pass, though...

Apparently so. https://github.com/nodejs/node/actions/runs/11505175998/job/32026308742?pr=55522#step:8:5381 (from #55522) shows

=== 2 tests failed
=== 1 tests CRASHED

whereas the same workflow for 22.11.0, https://github.com/nodejs/node/actions/runs/11484562340/job/31962627696?pr=55504#step:8:61762, shows

=== 729 tests failed
=== 2 tests CRASHED

I'm equally surprised/puzzled.

@joyeecheung
Member

It might be worth taking a look at the failures, though IIUC test failures in the coverage build don't fail the CI, so we don't really need to worry about them. Ideally the coverage threshold should be met again, though we could always just lower it; I think the threshold was only set to remind us about regressions. But if fixing it takes too much effort on a release branch, we might as well loosen it a bit, as long as it's not lowered too far.

@targos
Member

targos commented Oct 26, 2024

There's the same issue with v20.x

@richardlau richardlau added the release-agenda Issues and PRs to discuss during the meetings of the Release team. label Oct 26, 2024
@richardlau
Member Author

> There's the same issue with v20.x

Is that referring to the coverage failing to meet the threshold, or the error code mismatches?

@targos
Member

targos commented Oct 26, 2024

Coverage threshold

@richardlau
Member Author

I wonder if this is related to #51251.
