[Bug] Jest detects open handles #928
Val did some investigating in temporalio/samples-typescript#189
Test case can be reduced to this:
The issue is specifically with
The issue relates to usage of the
Neon's
Opened a ticket in Neon: neon-bindings/neon#948
We hit the same issue in our test setup. Weirdly, it causes issues in CircleCI, as Jest just won't exit despite using
Is there any known workaround?
Since the new version we are hitting an issue with Jest on CircleCI as well.
I'm not aware of any workaround regarding Jest's warning about open handles. This warning message can simply be ignored. It may also cause a 10-second delay on process termination, which should generally not be that much of an issue.
I don't think this is related to the "detected open handle" bug. Wondering if this might be related, though there's unfortunately very little context to work with. @mandriv @Irvenae Let's assume this is a different issue... Could you please provide more details on these situations? Here are some ideas of info that might help figure this out, but feel free to add whatever you think could be useful:
This is the log we get in CircleCI:
In this run it only runs 1 test file. In this test file we only create 1 timeSkipping test env, which we reuse for 12 tests. A successful run would, 3 minutes later, show Worker state changed { state: 'STOPPING' } and so on. We only print info logs; I will try to reproduce with debug logs as well.
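For reference, a minimal sketch of that kind of setup (one time-skipping environment created per test file and reused across its tests); this is purely illustrative, not the reporter's actual code:

```ts
import { TestWorkflowEnvironment } from '@temporalio/testing';

let testEnv: TestWorkflowEnvironment;

beforeAll(async () => {
  // One time-skipping environment per test file, reused by all tests in the file
  testEnv = await TestWorkflowEnvironment.createTimeSkipping();
});

afterAll(async () => {
  // Stops the ephemeral test server; Jest may still warn about the Runtime's native handle
  await testEnv?.teardown();
});
```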
I might have been on the wrong track; today I reran this 10 times on CircleCI with no failure. I know for sure this happens when we have multiple tests running, but there it could be OOM as well (so I'm setting Node options and/or going to a single Jest worker to see if it is resolved). Anyway, I am also investigating further to potentially pinpoint it better.
FWIW, we found that we were running out of either CPU or memory when executing in CircleCI. The only thing that helped us was to reduce the number of Jest workers; we have not had a single failure since then.
Not sure if this is the issue here, but I personally recommend always explicitly setting Node's

Based on your log, it appears you are executing inside a containerized environment with a memory constraint. Node itself doesn't play great in such cases, because it doesn't know about those constraints and instead configures its heap allocation and garbage collection based on the machine's total memory.

The opposite is also possible: by default, Node will configure its heap allocation limit and garbage collection algorithms to 25% of available memory, up to a limit of 4 GB (assuming Node 14 to 18 on a 64-bit CPU). That means that your Node process may not be taking advantage of all the resources available to it, which could explain why it is not performing as it should.

Both cases can be resolved by explicitly setting and properly tuning Node's
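A hedged sketch of this kind of tuning, assuming the flag in question is Node's standard `--max-old-space-size` heap option and that Jest's parallelism is capped via `maxWorkers`; the values below are placeholders to be tuned to the container's actual limits:

```ts
// jest.config.ts - illustrative values only, not recommendations
import type { Config } from 'jest';

const config: Config = {
  // Cap parallelism so each Jest worker gets a predictable share of the available memory
  maxWorkers: 2,
  // Node's heap limit itself is set outside of Jest, e.g. in the CI job:
  //   NODE_OPTIONS=--max-old-space-size=2048 npx jest
};

export default config;
```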
So, any progress regarding this issue? This is not just a warning: in our CI/CD it breaks the Jest tests and they don't pass.
We have resolved it by using a bigger machine, reducing the number of Jest workers, and setting

I am now evaluating AVA, tbh, because attaching the debugger is spotty with Jest.
I am not sure if this is relevant, but I will add it here just to get others' opinions as well. When I run the test with

Why is this behaviour happening?
Thanks for the feedback. We haven't had the capacity to investigate this further for now; we'll need more work to understand what's going on here.
We've found a workaround for this when running Workers in Jest. The summary of the worker workaround is:
```ts
export default async function teardown() {
  // ask each worker to start shutting down
  global.workers.forEach((worker) => {
    // do not wait for this, it is non-blocking
    worker.shutdown();
  });
  // the promises created by `Worker.run` will resolve once the worker has actually shut down
  await Promise.all(global.workerPromises);
  // fyi - tearing down the environment won't work until after the workers disconnect
  await global.TemporalEnvironment?.teardown();
}
```

To get your typing correct, you can declare the globals:

```ts
declare global {
  var TemporalEnvironment: TestWorkflowEnvironment;
  var workers: Worker[];
  var workerPromises: Promise<void>[];
}
```

We have found that the most "sane" configuration of this is to use a Jest globalSetup to start the

In our worker repos, using Node 18 and temporalio/* @ 1.9.0, this closes all of our handles and lets Jest exit gracefully. We're still unable to work around the open handles from the
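A hedged sketch of what the corresponding globalSetup might look like under this approach, assuming the `declare global` block above; the choice of local environment, the task queue name, and the workflows path are illustrative only, not the commenter's actual code:

```ts
// global-setup.ts - illustrative sketch only
import { TestWorkflowEnvironment } from '@temporalio/testing';
import { Worker } from '@temporalio/worker';

export default async function setup() {
  // Start one test environment for the whole run (could also be createTimeSkipping())
  global.TemporalEnvironment = await TestWorkflowEnvironment.createLocal();

  // Create and start one or more Workers against that environment
  const worker = await Worker.create({
    connection: global.TemporalEnvironment.nativeConnection,
    taskQueue: 'test', // hypothetical task queue name
    workflowsPath: require.resolve('./workflows'), // hypothetical workflows module
  });

  global.workers = [worker];
  // Keep the promises from `Worker.run` so globalTeardown can await a clean shutdown
  global.workerPromises = [worker.run()];
}
```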
It turns out that the handles opened by

In your Jest config:

```
{
  ...,
  "globalTeardown": "tests/teardown.ts"
}
```

teardown.ts:

```ts
import { Runtime } from '@temporalio/worker';

export default async function () {
  await Runtime.getInstance().shutdown();
}
```

and your open handles will get cleaned up. There is some delay (a few seconds) after the tests finish while the Runtime shuts down, but it stops Jest from complaining!
Thanks a lot @jbsil for sharing your findings on this issue. It will certainly help a few of our users. I have to admit that I'm very intrigued by why this solution actually resolves the symptom, as I'm pretty sure that this will not force unloading of Neon's global
This is still an ongoing issue; sometimes it also takes some time before everything is terminated, according to the Jest runner.
We are seeing this for

```
Jest has detected the following 1 open handle potentially keeping Jest from exiting:

  ●  neon threadsafe function

      34 |
      35 | beforeEach(async () => {
    > 36 |   env = new MockActivityEnvironment({ attempt: 2 });
         |         ^
      37 |
      38 |   const compiledSubject = await compileTemplate({
      39 |     mergeFields: allMergeFields,

      at ../../../../node_modules/@temporalio/core-bridge/index.js:16:14
      at Function.create (../../../../node_modules/@temporalio/worker/src/runtime.ts:202:31)
      at Function.instance (../../../../node_modules/@temporalio/worker/src/runtime.ts:194:17)
      at Activity.makeActivityLogger (../../../../node_modules/@temporalio/worker/src/activity.ts:82:34)
      at new Activity (../../../../node_modules/@temporalio/worker/src/activity.ts:61:12)
      at new MockActivityEnvironment (../../../../node_modules/@temporalio/testing/src/index.ts:416:21)
      at Object.<anonymous> (src/lib/actions/emails/prepareEmailAndMessageData.spec.ts:36:11)
```
Getting this warning when running a simple test with the test environment:
Code for reproduction:
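A minimal sketch of the kind of test described (a simple test using the time-skipping test environment); this is illustrative only, not the reporter's actual reproduction:

```ts
import { TestWorkflowEnvironment } from '@temporalio/testing';

describe('test environment', () => {
  let env: TestWorkflowEnvironment;

  beforeAll(async () => {
    env = await TestWorkflowEnvironment.createTimeSkipping();
  });

  afterAll(async () => {
    await env?.teardown();
  });

  it('starts and tears down the environment', async () => {
    // Even this trivial usage can be enough for Jest to report the
    // "neon threadsafe function" handle as still open
    expect(env.client).toBeDefined();
  });
});
```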