Ctrl+C Doesn't Exit Benchmark #313

Closed
1 of 2 tasks
yeldarby opened this issue Mar 10, 2024 · 1 comment · Fixed by #325
Labels
bug Something isn't working

Comments

@yeldarby
Contributor

Search before asking

  • I have searched the Inference issues and found no similar bug report.

Bug

I got a configuration error, so I want to exit the CLI benchmark before it finishes, but Ctrl+C doesn't do anything. It just keeps going.

[email protected]:/$ inference benchmark python-package-speed -m "yolov8n-640"
Loading images...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:04<00:00,  1.79it/s]
Detected images dimensions: {(612, 612), (440, 640), (427, 640), (500, 375), (334, 500), (480, 640), (375, 500)}
Inference will be executed with the following parameters: {}
2024-03-10 21:00:56.403012156 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:640 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
Model details | task_type=object-detection | model_type=yolov8n | batch_size=batch | input_height=640 | input_width=640
Warming up model...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:07<00:00,  1.31it/s]
avg: 798.6ms    | rps: 1.2      | p75: 804.1ms  | p90: 804.1    | %err: 0.0
avg: 812.9ms    | rps: 0.9      | p75: 848.7ms  | p90: 894.4    | %err: 0.0
avg: 1287.8ms   | rps: 0.7      | p75: 896.0ms  | p90: 2625.1   | %err: 0.0
avg: 1390.6ms   | rps: 0.7      | p75: 1699.6ms | p90: 2337.5   | %err: 0.0
avg: 1439.3ms   | rps: 0.7      | p75: 1698.5ms | p90: 2125.1   | %err: 0.0
avg: 1460.6ms   | rps: 0.7      | p75: 1698.9ms | p90: 1912.7   | %err: 0.0
^C^C^C^Cavg: 1472.2ms   | rps: 0.7      | p75: 1700.0ms | p90: 1901.5   | %err: 0.0
^C^C^C^Cavg: 1381.2ms   | rps: 0.7      | p75: 1698.5ms | p90: 1845.4   | %err: 0.0
avg: 1321.5ms   | rps: 0.7      | p75: 1694.0ms | p90: 1790.5   | %err: 0.0
avg: 1239.1ms   | rps: 0.8      | p75: 1668.0ms | p90: 1730.3   | %err: 0.0
avg: 1186.2ms   | rps: 0.8      | p75: 1607.6ms | p90: 1700.2   | %err: 0.0
avg: 1129.2ms   | rps: 0.9      | p75: 1599.4ms | p90: 1700.2   | %err: 0.0
avg: 1084.3ms   | rps: 0.9      | p75: 1548.2ms | p90: 1699.6   | %err: 0.0
^C^C^Cavg: 1059.4ms     | rps: 0.9      | p75: 1178.0ms | p90: 1698.8   | %err: 0.0
^C^C^C^C

Environment

[email protected]:/$ pip freeze | grep inference
inference-cli==0.9.15
inference-gpu==0.9.15
inference-sdk==0.9.15
[email protected]:/$ 

pytorch/pytorch:2.2.0-cuda12.1-cudnn8-devel Docker image after running pip install inference-gpu

Minimal Reproducible Example

inference benchmark python-package-speed -m "yolov8n-640"

Then try Ctrl+C

Additional

No response

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!
yeldarby added the bug label Mar 10, 2024
@PawelPeczek-Roboflow
Collaborator

Yes, while running the benchmark we use multiple threads, and we do not manage them in a fully clean way, so the SIGINT signal does not cause graceful termination. I will keep an eye on that.
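For context, in CPython SIGINT is delivered only to the main thread, so worker threads never see Ctrl+C; if they are non-daemon and nothing tells them to stop, the interpreter waits for them and the process keeps running. Below is a minimal sketch of that situation and one common remedy (a shared stop event the workers poll); the worker layout and names are assumptions for illustration, not the actual benchmark code.

import threading
import time

# Shared flag the workers poll; set from the main thread on Ctrl+C.
stop_event = threading.Event()

def benchmark_worker(worker_id: int, iterations: int = 100) -> None:
    # Stand-in for the real per-thread benchmark loop.
    for _ in range(iterations):
        if stop_event.is_set():
            break
        time.sleep(0.1)  # stand-in for a single inference request

def run_benchmark(num_workers: int = 4) -> None:
    workers = [
        threading.Thread(target=benchmark_worker, args=(i,))
        for i in range(num_workers)
    ]
    for worker in workers:
        worker.start()
    try:
        for worker in workers:
            worker.join()
    except KeyboardInterrupt:
        # SIGINT only reaches the main thread; propagate the request to the
        # workers via the event, then wait for them to wind down.
        stop_event.set()
        for worker in workers:
            worker.join()

if __name__ == "__main__":
    run_benchmark()

Marking the worker threads daemon=True is a cruder alternative: the process exits on Ctrl+C, but the workers are killed mid-request with no chance to clean up.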

hvaria added a commit to hvaria/inference that referenced this issue Mar 15, 2024
Fix for Bug #313: Ensure Graceful Interruption of Benchmark Process

This pull request addresses bug roboflow#313, where users were unable to interrupt the benchmarking process with Ctrl+C, so the benchmark kept running indefinitely and ignored interruption requests. The root cause was the absence of a signal-handling mechanism for keyboard interrupts during execution of the benchmarking commands.

  • Added a try-except block around the benchmarking execution commands in benchmark.py, so that a KeyboardInterrupt (e.g. via Ctrl+C) is caught and a graceful shutdown sequence can be initiated (sketched below).
  • Upon catching a KeyboardInterrupt, the script now calls stop_inference_containers() from container_adapter.py, which halts all running inference containers so that no orphaned containers are left consuming system resources after the interruption.
  • Enhanced stop_inference_containers() to handle both interactive and automated environments gracefully.
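A rough sketch of that pattern follows; it is not the actual PR diff. stop_inference_containers() is named in the description above, but its body, the benchmark entry point, and the exit handling below are stand-ins for illustration.

import sys
import time

def stop_inference_containers() -> None:
    # Stand-in for the real helper in container_adapter.py described above.
    print("Stopping inference containers...")

def run_python_package_speed_benchmark() -> None:
    # Stand-in for the real benchmark loop in benchmark.py.
    while True:
        time.sleep(1.0)

def benchmark_command() -> None:
    try:
        run_python_package_speed_benchmark()
    except KeyboardInterrupt:
        # Ctrl+C now triggers cleanup instead of leaving containers running.
        print("Benchmark interrupted by user, cleaning up...")
        stop_inference_containers()
        sys.exit(130)  # conventional exit status for termination by SIGINT

if __name__ == "__main__":
    benchmark_command()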
yeldarby added a commit that referenced this issue Mar 15, 2024
Fix for Bug #313: Ensure Graceful Interruption of Benchmark Process