I have created a simple demo showing how to feed RealSense depth data into the pose network. The problem is that I do not have a specific configuration for the RealSense device, which I suspect is why the output is currently useless.
It would be great if you could point me in the right direction, or tell me whether this is even worth pursuing. Or do you think I would have to retrain the network on RealSense data itself?
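One plausible source of the useless output is a depth-encoding mismatch: RealSense frames are uint16 values scaled by a device-specific depth scale, which may not match the encoding the network was trained on. Below is a minimal preprocessing sketch; the depth scale, clipping range, and normalisation are all assumptions that would need to match the training data, and `preprocess_depth` is a hypothetical helper, not part of the repository.

```python
import numpy as np

def preprocess_depth(raw_depth, depth_scale=0.001, max_depth_m=4.0):
    """Convert a raw RealSense uint16 depth frame into a normalised float image.

    Assumptions (to be verified against the device and the training data):
    - depth_scale: metres per depth unit; 0.001 is a common RealSense default,
      the actual value comes from the device (e.g. pyrealsense2's
      depth_sensor.get_depth_scale()).
    - max_depth_m: clipping range; must match the range used at training time.
    """
    depth_m = raw_depth.astype(np.float32) * depth_scale  # units -> metres
    depth_m = np.clip(depth_m, 0.0, max_depth_m)          # drop far outliers
    return depth_m / max_depth_m                          # normalise to [0, 1]

# Example with a synthetic frame; a real frame would come from the
# RealSense SDK (e.g. np.asanyarray(depth_frame.get_data())).
frame = np.full((480, 640), 2000, dtype=np.uint16)  # 2000 units = 2.0 m
out = preprocess_depth(frame)
print(out.max())  # 0.5
```

If the network was trained on raw Kinect 2 depth in millimetres, the normalisation step would instead be replaced by whatever scaling that pipeline applied.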
The 2D and 3D estimators were trained with Kinect 2 and Asus Xtion data.
In my experience with the Intel RealSense, the noise makes it problematic
to estimate even 2D poses.
I would recommend retraining the 3D and possibly the 2D estimator.
Do you have an example of the output of the 2D and 3D pose?
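Before retraining, it may also be worth checking how much of the problem is raw sensor noise. A minimal denoising sketch, assuming the dominant artefacts are speckle noise and zero-valued holes (missing depth), which are typical of RealSense frames; the kernel size is an assumption to tune per device, and `denoise_depth` is a hypothetical helper:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_depth(depth, kernel=5):
    """Suppress speckle noise and fill zero-valued holes in a depth frame.

    Valid pixels are kept as-is; invalid (zero) pixels are replaced by the
    median of their neighbourhood, which fills small holes from surrounding
    valid depth values.
    """
    filtered = median_filter(depth, size=kernel)
    return np.where(depth == 0, filtered, depth)

# Example: a uniform 2 m plane with a single dropped pixel.
depth = np.full((32, 32), 2000, dtype=np.uint16)
depth[16, 16] = 0                      # simulated hole
clean = denoise_depth(depth)
print(clean[16, 16])  # 2000 -- hole filled from neighbours
```

If the 2D estimator still fails on frames cleaned this way, that would point more strongly toward a domain gap that only retraining can close.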