
Thank you! #10

Open
raultron opened this issue Feb 16, 2018 · 10 comments

Comments

@raultron

Not really an Issue, just wanted to thank you. This package helped me with a crucial calibration that I needed on a tight schedule.

@nathanlem1

Could you please share the procedure you used to build this with ViSP as a backend?

@marcoesposito1988
Collaborator

Thanks a lot @raultron! It's always nice and encouraging to hear good feedback.

@marcoesposito1988
Collaborator

@nathanlem1: it would be great if you could create a new issue for each topic, to keep the repo organised.

I am using the ros-visp wrapper, which offers ROS interfaces (topics and services) for the ViSP library. In particular, I am using the hand2eye service.

The wrapper can be installed via apt in Ubuntu, as it is automatically packaged by the ROS community:
sudo apt-get install ros-lunar-vision-visp
After installing, you can start a roscore and the calibration service, as described in the docs.
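For reference, here is a minimal sketch of bringing the service up after installation; the node name is the one shipped in the visp_hand2eye_calibration package, but double-check it against the docs of your ROS distribution:

```
# Terminal 1: start the ROS master
roscore

# Terminal 2: run the ViSP hand-eye calibration service node
# (node name as shipped in the visp_hand2eye_calibration package; it
# advertises the compute_effector_camera services that act as a backend)
rosrun visp_hand2eye_calibration visp_hand2eye_calibration_calibrator
```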

@Sinchiguano

> Not really an Issue, just wanted to thank you. This package helped me with a crucial calibration that I needed on a tight schedule.

Hi, could you please let me know how you did the calibration? I am still struggling to get good results: every time I compute the calibration with a new sample set I get different results, and all of them are wrong.

I am performing the eye-in-hand calibration with a RealSense D435 camera and a UR10 robot.

Thanks in advance

@raultron
Author

raultron commented Aug 12, 2019

Hi there,

My calibration setup was somewhat specific. I have a camera with spherical markers mounted on top of it, which are detected by an Optitrack system with millimeter accuracy. I wanted the transform between the camera coordinates and the spherical marker coordinates.

In this case the camera has to move and the Aruco marker is fixed in space, so I had to set up the topics correctly for easy_handeye with a launch file. I placed the Aruco marker on a table facing the camera, put the camera on a tripod, and moved it around, changing both relative orientation and distance, for more than 20 measurements. I made sure the setup was completely static during each measurement and tried to cover the working volume evenly. easy_handeye then gave me the transform between the camera and the spherical marker coordinates.
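For concreteness, here is a sketch of how the launch invocation could look for this configuration; the frame names below are placeholders for the Optitrack setup, and the argument names are those of easy_handeye's calibrate.launch (check the launch file of your version):

```
# Sketch: moving camera, fixed Aruco marker. The Optitrack rigid body of
# spherical markers plays the role of the "hand", so eye_on_hand is true.
# All frame names are placeholders for this specific setup.
roslaunch easy_handeye calibrate.launch \
    eye_on_hand:=true \
    robot_base_frame:=optitrack_world \
    robot_effector_frame:=spherical_markers_body \
    tracking_base_frame:=camera_optical_frame \
    tracking_marker_frame:=aruco_marker_frame
```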

I also performed an additional calibration.

I placed additional spherical markers on the Aruco paper, because I needed the transform from the spherical Optitrack coordinates to the center of the Aruco marker. For that case I changed the topics with the launch file, so now the camera is the one that is fixed and the Aruco marker is the one that moves.

With those two calibrations I could obtain ground-truth measurements for my camera-to-Aruco-marker detection. All the measurements were consistent with what I was measuring, so the calibrations were correct.

@Sinchiguano

Thank you very much for your detailed answer, I appreciate it. I hope I can make it and then I will share the experience.

Have a good one.
Cesar

@marcoesposito1988
Collaborator

@Sinchiguano, did you manage to solve your issues? Off the top of my head, they may be due to a couple of factors, including:

  • the marker has a different size than what is passed into the aruco configuration (printers love to scale down PDFs, so I check with a ruler every time and still have to throw away a couple of printouts)
  • you sample the transforms just after the robot has stopped moving, and the tf frames have not reached their final position yet because of some lag (see the sanity-check sketch after this list)
  • you have outliers in your sampled transforms, for example because in some cases the marker was not detected at all, but the old position was still lingering in tf
  • the tracking is not stable because the marker appears too small in the image, or because it is parallel to the image plane of the camera (in that case the detection can be very unstable)
  • the calibration of the camera is totally bananas (uncommon with RGB-D cameras, since they typically ship with a decent factory calibration that is picked up by the ROS driver, but definitely possible with RGB cameras if you didn't do the calibration yourself; the sketch after this list shows the standard recalibration tool)
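For the tf-lag and camera-calibration points above, two quick sanity checks as a sketch; the topic and frame names are placeholders for a typical RealSense setup, not the exact names in your system:

```
# Watch the detected marker pose in tf: when the setup is static it should
# be stable, and it should stop updating if the marker is lost
# (frame names are placeholders)
rosrun tf tf_echo camera_color_optical_frame aruco_marker_frame

# Recalibrate the RGB intrinsics with the standard ROS camera_calibration
# tool (the checkerboard --size and --square values are examples)
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 \
    image:=/camera/color/image_raw camera:=/camera/color
```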

@Sinchiguano

Hey,

Thanks a lot for your detailed explanation, it is really helpful. Everything you listed makes absolute sense: I can see that I was doing most of those steps wrong, or with less care than they need. I now understand what I was doing wrong thanks to your advice.

I think that with your tips the calibration is going to work.

Best regards
Cesar

@Sinchiguano

Sinchiguano commented Aug 27, 2019

Hey, I just wanted to thank you for this repository: I finally succeeded with the robot-camera calibration. I did both types of calibration, eye-in-hand and eye-on-base, without any problem, but without your tips I would not have made it.
Best regards
Cesar

@wangqingyu985

Kind thanks to the authors of this excellent repo!
