Updated node test reporter with node v24 support #6804
kanej left a comment:
This looks good for supporting multiple node versions in our node test reporter suite. I think we should separate out the question of the reporting of subtests.

Yes, totally! I just wanted to point out that it doesn't look like a regression between v22 and v24 after all.
Resolves #6792
In this PR, I added expected result file versioning by node major version to selected node test reporter integration tests. After these changes, the `nested-test` fixture will have separate expected result snapshots for node v22 and v24. This will enable us to continue testing the reporter with both versions in CI.

For now, the versioning of result files for the `nested-test` fixture is hardcoded. If we find more tests that require this in the future, we should either allow the tests to self-identify as such or simply version all the result files.

The other thing from #6681 (comment) that this PR was supposed to address was how the subtest failures are reported, but I'm not sure we should address this.
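For illustration, here is a minimal sketch of how version-aware snapshot selection could look. The function name, fallback behavior, and directory layout are assumptions for this sketch, not the PR's actual implementation; only the `result.v22.svg` / `result.v24.svg` naming comes from the PR.

```typescript
// Hypothetical sketch: pick the expected-result snapshot for the current
// Node.js major version, falling back to an unversioned file. The helper
// name and fallback rule are assumptions, not the actual test code.
import { existsSync } from "node:fs";
import path from "node:path";

function resultSnapshotPath(fixtureDir: string, nodeMajor: number): string {
  // Prefer a version-specific snapshot, e.g. result.v24.svg
  const versioned = path.join(fixtureDir, `result.v${nodeMajor}.svg`);
  if (existsSync(versioned)) {
    return versioned;
  }
  // Fall back to a shared snapshot used by all versions
  return path.join(fixtureDir, "result.svg");
}

// Deriving the major version at runtime:
const major = Number(process.versions.node.split(".")[0]);
console.log(resultSnapshotPath("integration-tests/fixture-tests/nested-test", major));
```

A scheme like this would let unversioned fixtures keep a single snapshot while hardcoded fixtures such as `nested-test` carry one snapshot per supported major version.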
This is how they are being displayed in node v22 - https://raw.githubusercontent.com/NomicFoundation/hardhat/4093d9c10b59b8086824330900b25d0616369ef1/v-next/hardhat-node-test-reporter/integration-tests/fixture-tests/nested-test/result.v22.svg
And this is how they are being displayed in node v24 - https://raw.githubusercontent.com/NomicFoundation/hardhat/4093d9c10b59b8086824330900b25d0616369ef1/v-next/hardhat-node-test-reporter/integration-tests/fixture-tests/nested-test/result.v24.svg
Apart from fewer stack trace frames and the lack of a cancelled unawaited test, the outputs are very similar.
The question is whether we should report failure details 5), 6), and 7) at all. I'd argue yes, because those failures are for failing `it`s rather than `describe`s (see `hardhat/v-next/hardhat-node-test-reporter/integration-tests/fixture-tests/nested-test/test.ts`, line 21 at 4093d9c). Since an `it` represents a test case, we should have an indication of it failing when one of its "subtests" fails. The reason we report on these failures after reporting on the subtests is that we display the results as soon as we get them.
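To make the `it`-with-subtests pattern concrete, here is a self-contained sketch (not the actual `nested-test` fixture) that writes a similar test to a temp file and runs it with `node --test`. The subtest failure propagates to the enclosing `it`, so the reporter sees failure entries for both, which is why entries like 5), 6), and 7) appear.

```typescript
// Self-contained demonstration: an it() whose subtest fails also fails
// itself. We write the fixture to a temp file and run it with node --test.
import { spawnSync } from "node:child_process";
import { mkdtempSync, writeFileSync } from "node:fs";
import os from "node:os";
import path from "node:path";

const fixture = `
const { describe, it } = require("node:test");
describe("outer suite", () => {
  it("parent test with subtests", async (t) => {
    // The subtest failure also marks the enclosing it() as failed,
    // which is why the reporter shows an entry for the parent too.
    await t.test("subtest that fails", () => {
      throw new Error("subtest failure");
    });
  });
});
`;

const dir = mkdtempSync(path.join(os.tmpdir(), "nested-test-demo-"));
const file = path.join(dir, "fixture.test.cjs");
writeFileSync(file, fixture);

const result = spawnSync(process.execPath, ["--test", file], {
  encoding: "utf8",
});
// Nonzero exit code: both the subtest and its parent are reported as failed.
console.log("exit code:", result.status);
```

Because the parent's `test:fail` event is only emitted once all of its subtests have finished, any streaming reporter necessarily prints the subtest failures first.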