Is this a request for help?: No

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:

What happened:

Sometimes when `ct` is printing container logs, `ct` hangs indefinitely and has to be killed.

What you expected to happen:

`ct` should not hang while printing container logs to stdout.

How to reproduce it (as minimally and precisely as possible):

This is easy for me to reproduce, though I don't know about others. The best examples I can show are outputs from some of our Actions runs.

We maintain a Helm chart with many dependencies, and most of the time the log output of our main chart (TimescaleDB) hangs `ct`. You can see a good example of this scenario in this job run (make e2e). The job ended up failing, not because the tests failed, but because the run's timeout was exhausted at ~320 min. I can reproduce this locally, and I let it sit for a total of two days to see what would happen: `ct` never recovered and stayed stuck in that state until I force-exited the process (Ctrl-C).

Here is an example of `ct` succeeding with the same chart data: job (make e2e)

I have more examples, but the outcome doesn't follow a pattern. A majority of the time `ct` hangs; sometimes it works fine when re-queued, sometimes not.

Anything else we need to know:

There is another report of this happening in this issue: #332 (comment)

I did apply the `kubectl-timeout` option that was added in #360, but it does not help with this issue for me.
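Since `kubectl-timeout` does not prevent the hang here, one blunt CI-side mitigation is to bound the whole invocation with GNU coreutils `timeout`, so a hung log stream fails fast instead of exhausting the runner's ~320 min global limit. A minimal sketch, assuming `timeout` is available on the runner (the wrapper name and the limit values are illustrative, not part of `ct`):

```shell
#!/bin/sh
# Run a command under a hard time limit using GNU coreutils `timeout`.
# If the command hangs, `timeout` kills it and reports exit code 124,
# so CI can fail fast instead of waiting for the job's global limit.
run_bounded() {
    limit="$1"; shift
    timeout "$limit" "$@"
    status=$?
    if [ "$status" -eq 124 ]; then
        echo "command timed out after $limit" >&2
    fi
    return "$status"
}

# A command that finishes within the limit passes through normally...
run_bounded 5 true && echo "fast command ok"
# ...while a hung one is killed after the limit instead of blocking forever.
run_bounded 1 sleep 10 || echo "hung command was killed"
```

In a workflow this would wrap the `ct install` step, e.g. `run_bounded 30m ct install ...` with whatever limit fits the chart's normal run time. Exit code 124 is `timeout`'s convention for "killed by the time limit", so a hang can be distinguished from an ordinary test failure.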