Replies: 23 comments
-
Doesn't the --verify option guarantee the same performance?
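As I understand it, --verify only confirms that the PyTorch and ONNX models produce numerically matching outputs on a sample input, not end-to-end accuracy. A rough sketch of that kind of check (not the exact pytorch2onnx.py code; pytorch_model stands for the loaded model whose forward returns raw heatmaps from a plain tensor, e.g. forward_dummy):

```python
import numpy as np
import onnxruntime as ort
import torch

# Run both models on the same dummy input and compare the raw outputs.
dummy = torch.randn(1, 3, 256, 192)
with torch.no_grad():
    torch_out = pytorch_model(dummy).cpu().numpy()   # pytorch_model: assumed heatmap forward

sess = ort.InferenceSession('model.onnx')            # hypothetical path
onnx_out = sess.run(None, {sess.get_inputs()[0].name: dummy.numpy()})[0]

np.testing.assert_allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5)
print('Numerical values match between PyTorch and ONNX')
```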
-
For the post-processing of the ONNX model, do I need to write my own code to decode the model's output? Is there any relevant code for reference?
-
If you only need the quarter-pixel shift, it is simple to implement in C++. It also works well if you port the Python post-processing of the last layer directly. However, it depends on which model is exported. What model are you working on?
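For reference, a minimal NumPy sketch of that quarter-pixel-shift decoding (heatmap coordinates only; mapping back to the original image still needs the bbox center/scale transform):

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Sketch of argmax + quarter-pixel-shift decoding, in heatmap coordinates.

    heatmaps: float array of shape (N, K, H, W).
    Returns coords (N, K, 2) as (x, y) and scores (N, K).
    """
    n, k, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, k, -1)
    idx = flat.argmax(axis=2)
    scores = flat.max(axis=2)
    xs = (idx % w).astype(np.float32)
    ys = (idx // w).astype(np.float32)
    coords = np.stack([xs, ys], axis=2)

    # shift each peak by 0.25 px toward the neighbouring bin with the higher value
    for i in range(n):
        for j in range(k):
            hm = heatmaps[i, j]
            px, py = int(xs[i, j]), int(ys[i, j])
            if 0 < px < w - 1 and 0 < py < h - 1:
                dx = hm[py, px + 1] - hm[py, px - 1]
                dy = hm[py + 1, px] - hm[py - 1, px]
                coords[i, j] += 0.25 * np.sign([dx, dy])
    return coords, scores
```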
-
Hi @HoBeom, I am using HRNet and got an NxMx64x64 heatmap from the ONNX model. Which part of this library can I use to convert the 64x64 heatmaps to NxMx2 x- and y-coordinates?
-
@GavinOneCup please check
-
Hi @HoBeom, I got the ONNX model of higher_hrnet32_coco_512_512 using pytorch2onnx.py, and I got its outputs (layer 3645: [1, 34, 128, 128], layer 3678: [1, 17, 256, 256]). How do I parse these results to get the final 17 human body keypoints?
-
Hi @jin-s13, how am I supposed to set parameters like unbiased and post_process? I used this function, but it gives me wrong x and y results, which does not seem to be caused by these parameters. Thanks.
-
@LSC333 Hi, the (256x256) and (128x128) outputs you got are called heatmaps. You can use the function discussed above to convert them to x and y coordinates. However, it seems the function has some problem? I used the default setup and it did not give me the correct x and y values. If you can get the correct values, please let me know. Thanks.
-
@GavinOneCup @LSC333 Please see mmpose/mmpose/models/detectors/bottom_up.py, line 197 (commit b6092dd).
-
@jin-s13
-
For animal and face models, the decoding is the same as for the top-down body pose estimator, i.e.
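A minimal sketch of that decoding step, assuming the keypoints_from_heatmaps helper from mmpose.core.evaluation (the exact import path and keyword names vary slightly across mmpose versions; heatmaps below is the raw ONNX output, and the unbiased / post_process options mentioned earlier are keyword arguments of this function):

```python
import numpy as np
from mmpose.core.evaluation import keypoints_from_heatmaps

# heatmaps: (N, K, H, W) float array taken from the ONNX model output
# center, scale: per-person bbox center and scale used when cropping the input
# (hypothetical values for one person below)
center = np.array([[150.0, 250.0]], dtype=np.float32)
scale = np.array([[1.25, 1.67]], dtype=np.float32)

preds, maxvals = keypoints_from_heatmaps(heatmaps, center, scale)
# preds:   (N, K, 2) keypoint (x, y) in original image coordinates
# maxvals: (N, K, 1) confidence of each keypoint
keypoints = np.concatenate([preds, maxvals], axis=2)  # (N, K, 3)
```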
-
@jin-s13 Yeah, I am using
-
I want to double-check whether it is a bug in ONNX or in the code. Can you tell me how to run the checkpoint .pth file and the config file in the classic PyTorch way (i.e. call model() to get the result)?
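In case it helps, a sketch of running the checkpoint and config through mmpose's Python API instead of ONNX (paths and bbox numbers are placeholders; argument names and return values have changed a bit between mmpose versions):

```python
from mmpose.apis import init_pose_model, inference_top_down_pose_model

# hypothetical paths
config_file = 'configs/top_down/hrnet/coco/hrnet_w48_coco_256x192.py'
checkpoint_file = 'hrnet_w48_coco_256x192.pth'

model = init_pose_model(config_file, checkpoint_file, device='cpu')

# one detected person, bbox in xywh format (hypothetical numbers)
person_results = [{'bbox': [50, 80, 200, 400]}]

pose_results, _ = inference_top_down_pose_model(
    model, 'test.jpg', person_results, format='xywh')
print(pose_results[0]['keypoints'])  # (K, 3): x, y, score in image coordinates
```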
-
After some experiments, I found that the results from the ONNX model are different from the results generated by top_down_img_demo.py. I checked the results at line 313 of inference.py; even the heatmap output there differs from the heatmap produced by the ONNX model. Can you explain why?
-
@GavinOneCup It seems that your ONNX conversion was not successful. Have you tried
-
@jin-s13 I tried it just now and got the result: "The numerical values are same between Pytorch and ONNX". To be clear, I think I have a similar problem to what LSC333 described above. The output I want is the keypoints as N x M x (x, y, score). The output at line 313 of inference.py gives me this result, and it can also give me heatmaps. However, the output of the ONNX model is only a heatmap of size N x M x 64 x 64, where 64 is my heatmap size, and this heatmap differs from the heatmap values generated by inference.py.
-
I think what happened is that in inference.py the model is built directly by build_posenet(cfg.model), whereas in pytorch2onnx.py it is additionally converted by _convert_batchnorm(model). Does that mean I need to convert it back after exporting the ONNX model?
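For what it's worth, my reading is that _convert_batchnorm only replaces SyncBatchNorm modules with ordinary BatchNorm, copying the weights and running statistics, so it should be numerically equivalent in eval mode and nothing needs to be converted back. Roughly (a sketch, not the exact pytorch2onnx.py code):

```python
import torch

def convert_syncbn(module):
    """Sketch: recursively replace SyncBatchNorm with BatchNorm2d, keeping parameters."""
    out = module
    if isinstance(module, torch.nn.SyncBatchNorm):
        out = torch.nn.BatchNorm2d(module.num_features, module.eps,
                                   module.momentum, module.affine,
                                   module.track_running_stats)
        if module.affine:
            out.weight.data = module.weight.data.clone()
            out.bias.data = module.bias.data.clone()
        out.running_mean = module.running_mean
        out.running_var = module.running_var
        out.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        out.add_module(name, convert_syncbn(child))
    return out
```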
-
@jin-s13 @GavinOneCup I used hrnet_w48_coco_256x192.onnx for some testing and finally seem to get the correct result, but why is the scale divided by 200? If I want to run prediction on my own picture, what should I divide by?
-
@LSC333 Please refer to #205 for more information about pixel_std. |
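For reference, this is roughly how a bbox is turned into the center/scale pair in mmpose's top-down pipeline (a sketch of my understanding, see #205; pixel_std = 200 is a fixed convention for the units of scale, so you still divide by 200 for your own pictures):

```python
import numpy as np

def xywh_to_center_scale(x, y, w, h, image_width=192, image_height=256,
                         pixel_std=200.0, padding=1.25):
    """Sketch: convert a person bbox (xywh) to the center/scale used by top-down models."""
    aspect_ratio = image_width / image_height
    center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)

    # keep the bbox aspect ratio consistent with the model input
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio

    # scale is the bbox size in units of pixel_std (the 200 you saw), with padding
    scale = np.array([w / pixel_std, h / pixel_std], dtype=np.float32) * padding
    return center, scale
```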
-
@LSC333 Hello, thanks for the blog! So we should feed cropped persons to the ONNX model? Thanks in advance!
-
This is my understanding: for the top-down method, we need to pass the cropped person and the size of the crop (x, y, w, h) into the pipeline, where x and y are the coordinates of the upper-left corner.
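To make that concrete, a simplified preprocessing sketch (the actual mmpose pipeline uses an affine warp driven by the center/scale pair from the helper above, so this crop-and-resize is only an approximation; paths and bbox numbers are made up):

```python
import cv2
import numpy as np

x, y, w, h = 50, 80, 200, 400           # hypothetical person bbox, xywh
img = cv2.imread('test.jpg')            # hypothetical image path

crop = img[y:y + h, x:x + w]
inp = cv2.resize(crop, (192, 256))      # (width, height) expected by a 256x192 model
inp = cv2.cvtColor(inp, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
inp = (inp - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]   # ImageNet mean/std
inp = inp.transpose(2, 0, 1)[None].astype(np.float32)          # (1, 3, 256, 192)
```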
-
Hi. I have the same question: the HRNet ONNX model's raw output is a heatmap. Is there any way to include the decoding step in the model itself?
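There is no built-in option that I know of, but one workaround is to export a small wrapper that adds a simple argmax decode on top of the heatmap head. A rough sketch (PoseWithArgmaxDecode is hypothetical; it assumes the wrapped forward returns raw heatmaps, and it skips the quarter-pixel shift and the mapping back to the original image):

```python
import torch
import torch.nn as nn

class PoseWithArgmaxDecode(nn.Module):
    """Hypothetical wrapper that appends a simple argmax decode to the heatmap head."""

    def __init__(self, pose_model):
        super().__init__()
        # pose_model's forward is assumed to return raw heatmaps (N, K, H, W),
        # e.g. after setting model.forward = model.forward_dummy as pytorch2onnx.py does
        self.pose_model = pose_model

    def forward(self, img):
        heatmaps = self.pose_model(img)
        n, k, h, w = heatmaps.shape
        flat = heatmaps.reshape(n, k, -1)
        scores, idx = flat.max(dim=2)                        # (N, K)
        xs = (idx % w).float()
        ys = torch.div(idx, w, rounding_mode='floor').float()
        # heatmap-space coordinates only: no quarter-pixel shift, no mapping
        # back to the original image (that still needs center/scale outside)
        return torch.stack([xs, ys, scores], dim=2)          # (N, K, 3)

# export sketch:
# torch.onnx.export(PoseWithArgmaxDecode(model), dummy_input,
#                   'pose_with_decode.onnx', opset_version=11)
```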
-
After running pytorch2onnx.py I got the ONNX model, but I have not found any documentation on how to run inference with it to check the correctness of its results. How can I do this? Or do you have a reference inference script for the ONNX model? I need your help.
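A minimal onnxruntime sketch for sanity-checking the exported model (the model path and sizes are assumptions for a 256x192 top-down config; onnxruntime needs to be installed separately):

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('hrnet_w48_coco_256x192.onnx')
input_name = sess.get_inputs()[0].name

# replace with a real preprocessed crop, shaped (1, 3, 256, 192); see the
# preprocessing sketch earlier in this thread
inp = np.random.rand(1, 3, 256, 192).astype(np.float32)

heatmaps = sess.run(None, {input_name: inp})[0]   # (1, K, 64, 48) for this config
print(heatmaps.shape)

# the heatmaps can then be decoded with keypoints_from_heatmaps (see above)
# and compared against the demo-script output for the same image
```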