
[engsys] improve smoke test runner #20872

Merged
merged 3 commits into Azure:main on Mar 30, 2022

Conversation

jeremymeng
Member

@jeremymeng jeremymeng commented Mar 15, 2022

  • Previously the runner loaded sample modules in main() without any error handling, so any load error would
    cause main() to exit and skip the rest of the samples. This PR delays sample loading until each sample
    is executed, so module-loading errors can be caught and reported.

  • The consumeEventsFromServiceBusQueue sample calls process.exit() upon
    completion, which causes the runner to exit because it currently runs tests in-proc.
    This PR updates the sample to exit with code 1 only when it fails.

Packages impacted by this PR

None
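The deferred-loading approach described above could be sketched roughly as follows. This is a hypothetical illustration, not the actual smoke-test runner code; the manifest entries and module paths are made up for the example:

```javascript
// Hypothetical manifest; "broken-sample" points at a module that cannot
// be loaded, "inline-sample" stands in for one that loads and runs fine.
const manifest = [
  { Name: "broken-sample", path: "./does-not-exist.js" },
  { Name: "inline-sample", path: null },
];

async function runSamples(entries) {
  const failures = [];
  for (const entry of entries) {
    try {
      // Load the module only when the sample is about to run, so a
      // broken module is reported here instead of aborting the runner.
      const sample = entry.path
        ? require(entry.path)
        : { main: async () => {} };
      await sample.main();
      console.log(`PASS ${entry.Name}`);
    } catch (err) {
      console.error(`FAIL ${entry.Name}: ${err.message}`);
      failures.push(entry.Name);
    }
  }
  return failures;
}

runSamples(manifest).then((failures) => {
  console.log(`${failures.length} sample(s) failed`);
  // A real runner would set process.exitCode = 1 when failures.length > 0.
});
```

The key point is that the try/catch surrounds the load itself, so a sample whose module fails to resolve is recorded as a failure and the loop continues to the next entry.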

@check-enforcer

This pull request is protected by Check Enforcer.

What is Check Enforcer?

Check Enforcer helps ensure all pull requests are covered by at least one check-run (typically an Azure Pipeline). When all check-runs associated with this pull request pass then Check Enforcer itself will pass.

Why am I getting this message?

You are getting this message because Check Enforcer did not detect any check-runs associated with this pull request within five minutes. This may indicate that your pull request is not covered by any pipelines, so Check Enforcer is correctly blocking the pull request from being merged.

What should I do now?

If the check-enforcer check-run is not passing and all other check-runs associated with this PR are passing (excluding license-cla), you can ask Check Enforcer to evaluate your pull request again by adding a comment to this pull request as follows:
/check-enforcer evaluate
Typically evaluation only takes a few seconds. If you know that your pull request is not covered by a pipeline and this is expected, you can override Check Enforcer using the following command:
/check-enforcer override
Note that using the override command triggers alerts so that follow-up investigations can occur (PRs still need to be approved as normal).

What if I am onboarding a new service?

Often, new services do not have validation pipelines associated with them. To bootstrap pipelines for a new service, issue the following command as a pull request comment:
/azp run prepare-pipelines
This will run a pipeline that analyzes the source tree and creates the pipelines necessary to build and validate your pull request. Once the pipeline has been created you can trigger the pipeline using the following comment:
/azp run js - [service] - ci

@jeremymeng jeremymeng force-pushed the engsys/samples-run-catch-error branch from 34e6c91 to 83ea66d Compare March 15, 2022 18:44
- previously the runner loaded sample modules in main(). Any error would cause
main() to exit, skipping the rest of the tests. Now the samples are loaded when they
are executed, so errors in loading modules can be caught.

- the consumeEventsFromServiceBusQueue sample is calling `process.exit()` upon
completion, causing the runner to exit because it currently runs tests in-proc.
Update the test to exit with code 1 when failing.
@jeremymeng jeremymeng force-pushed the engsys/samples-run-catch-error branch from 29a9aae to 6ac6a70 Compare March 17, 2022 00:56
@jeremymeng jeremymeng marked this pull request as ready for review March 17, 2022 19:49
@jeremymeng jeremymeng changed the title [sample runner] catch error and continue next sample [engsys][sample runner] catch error and continue next sample Mar 17, 2022
@jeremymeng jeremymeng changed the title [engsys][sample runner] catch error and continue next sample [engsys] improve smoke test runner Mar 17, 2022
Member

@ellismg ellismg left a comment

Some comments. I'd be happy if we want to change the EG samples a little more so they never call process.exit, even in the failure case, if that would help in cases like this.

```js
main().catch((err) => {
  console.error("The sample encountered an error:", err);
  process.exit(1);
});
```
Member

Just wondering - would it be better if we rethrew the error or something to allow the harness to know this sample failed but other scenarios could continue to run?

Member Author

Good question! This actually doesn't matter to the test runner as it only invokes the sample's main() function. I was just following the convention of most existing samples.

Member Author

It's possible that process.exit(1) was used at some point? @witemple-msft I wonder whether you have any ideas as you worked with dev-tool and samples before?

Member

This is just a conventional pattern for async scripts in JavaScript. Instead of rethrowing the error, you also exit(1). The problem, fundamentally, is that an unhandled Promise rejection does not result in an error status in some (all?) versions of Node, so explicitly calling exit is the only way for the script itself to report that it failed to the shell.
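As a side-by-side illustration of the options discussed here (this is a hedged sketch, not code from the PR; the failing `main()` is simulated), a sample can report failure either by hard-exiting or by setting `process.exitCode`:

```javascript
async function main() {
  // Simulate a failing sample.
  throw new Error("simulated failure");
}

// Pattern 1 (the common convention): hard exit. This terminates the
// process immediately, which is exactly what breaks an in-proc runner.
//   main().catch((err) => { console.error(err); process.exit(1); });

// Pattern 2: set process.exitCode instead. The process still exits
// with status 1 once the event loop drains, but nothing is torn down
// abruptly, so an in-proc harness could keep running other samples.
main().catch((err) => {
  console.error("The sample encountered an error:", err.message);
  process.exitCode = 1;
});
```

Either way the script itself signals failure to the shell; the difference is only whether the exit is immediate or deferred until pending work completes.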


```diff
  // Bring all samples and includes into memory
- for (let entry of manifest) {
-   console.log(`Importing samples for ${entry.Name}...`);
+ for (const entry of manifest) {
```
Member

Should we include a TODO here to start running the ARM samples as well at some point?

Member Author

I wasn't sure about the arm- samples. @praveenkuttappan do you know whether we want them to run as part of the smoke tests?

Member Author

@ckairen should we exclude ARM samples when generating the run manifest json, or is it also desirable to include them?

Member

I think it will be beneficial for the smoke tests.

common/smoke-test/run.js (outdated comment, resolved)
@jeremymeng jeremymeng requested a review from ckairen March 29, 2022 21:49
@jeremymeng jeremymeng merged commit 388fa97 into Azure:main Mar 30, 2022
@jeremymeng jeremymeng deleted the engsys/samples-run-catch-error branch March 30, 2022 17:54