[engsys] improve smoke test runner #20872
Conversation
This pull request is protected by Check Enforcer.

What is Check Enforcer?
Check Enforcer helps ensure all pull requests are covered by at least one check-run (typically an Azure Pipeline). When all check-runs associated with this pull request pass, Check Enforcer itself will pass.

Why am I getting this message?
You are getting this message because Check Enforcer did not detect any check-runs being associated with this pull request within five minutes. This may indicate that your pull request is not covered by any pipelines, and so Check Enforcer is correctly blocking the pull request from being merged.

What should I do now?
If the check-enforcer check-run is not passing and all other check-runs associated with this PR are passing (excluding license-cla), you can try telling Check Enforcer to evaluate your pull request again by adding a comment to this pull request.

What if I am onboarding a new service?
Often, new services do not have validation pipelines associated with them. To bootstrap pipelines for a new service, you can issue a command as a pull request comment.
Force-pushed from 34e6c91 to 83ea66d
- Previously the runner loaded sample modules in `main()`. Any error would cause `main()` to exit, skipping the rest of the tests. Now the samples are loaded when they are executed, so errors in loading modules can be caught (sketched below).
- The `consumeEventsFromServiceBusQueue` sample calls `process.exit()` upon completion, causing the runner to exit because it currently runs tests in-proc. Update the test to exit with code 1 when failing.
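As a rough sketch of the lazy-loading approach described above (the `ManifestEntry` shape, its field names, and the `runSamples` helper are illustrative assumptions, not the actual dev-tool runner code):

```ts
// Hypothetical sketch: import each sample module only when it is about to run,
// so a failed import is reported for that one sample instead of aborting the runner.
interface ManifestEntry {
  name: string;       // assumed field names; the real manifest may differ
  modulePath: string;
}

async function runSamples(manifest: ManifestEntry[]): Promise<number> {
  let failures = 0;
  for (const entry of manifest) {
    try {
      console.log(`Importing samples for ${entry.name}...`);
      const sample = await import(entry.modulePath); // deferred until execution time
      await sample.main();
    } catch (err) {
      failures++;
      console.error(`Sample ${entry.name} failed:`, err);
    }
  }
  // Report overall failure without exiting mid-run.
  return failures > 0 ? 1 : 0;
}
```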
Force-pushed from 29a9aae to 6ac6a70
Some comments - happy if we want to change the EG samples a little more so they never call `process.exit`, even in the failure case, if that would help in cases like this.
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
  process.exit(1);
});
Just wondering - would it be better if we rethrew the error or something to allow the harness to know this sample failed but other scenarios could continue to run?
Good question! This actually doesn't matter to the test runner, as it only invokes the sample's `main()` function. I was just following the convention of most existing samples.
It's possible that `process.exit(1)` was used at some point? @witemple-msft I wonder whether you have any ideas, as you worked with dev-tool and samples before?
This is just a conventional pattern for async scripts in JavaScript. Instead of rethrowing the error, you also exit(1). The problem, fundamentally, is that an unhandled Promise rejection does not result in an error status in some (all?) versions of Node, so explicitly calling `exit` is the only way for the script itself to report that it failed to the shell.
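As a concrete illustration of that convention (a sketch, not the exact sample code), the tail of an async sample script typically looks like this:

```ts
async function main(): Promise<void> {
  // ... sample logic ...
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
  // Rethrowing here would only surface as an unhandled rejection, which does not
  // reliably produce a non-zero exit status on every Node.js version, so the
  // script reports the failure to the shell explicitly.
  process.exit(1);
});
```

When such a sample runs in-proc inside a shared runner, that same `process.exit(1)` also terminates the runner, which is why this PR catches errors around each sample instead of relying on the sample's own exit behavior.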
  // Bring all samples and includes into memory
- for (let entry of manifest) {
+ for (const entry of manifest) {
    console.log(`Importing samples for ${entry.Name}...`);
Should we include a TODO here to start running the ARM samples as well at some point?
I wasn't sure about arm- samples. @praveenkuttappan do you know whether we want them to run as part of smoke tests?
@ckairen should we exclude ARM samples when generating the run manifest json, or is it also desirable to include them?
I think it will be beneficial for smoke tests.
- Previously the runner loads sample modules in `main()` without any error handling, so any error would cause `main()` to exit, skipping the rest of the samples. This PR delays sample loading to when the samples are executed, so errors in loading modules can be caught and reported.
- The `consumeEventsFromServiceBusQueue` sample calls `process.exit()` upon completion, causing the runner to exit because it currently runs tests in-proc (see the sketch below). Update the test to exit with code 1 when failing.
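A simplified sketch of why the in-proc design is sensitive to `process.exit()` (the `runInProc` helper and its signature are hypothetical, for illustration only):

```ts
// The runner and the samples share one Node.js process, so a sample that calls
// process.exit() on success terminates the runner before later samples execute.
async function runInProc(samples: Array<() => Promise<void>>): Promise<void> {
  for (const sample of samples) {
    // If sample() calls process.exit(), control never returns here and the
    // remaining samples are silently skipped.
    await sample();
  }
}
```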
Packages impacted by this PR
None