Difference in coordinates between the RGB camera and the point cloud scanner?

Dear all,

We are using a Mech-Eye Pro M to capture RGB images and point clouds. The RGB images contain ArUco markers for estimating the marker pose relative to the camera, and the resulting rotation and translation are used to transform the captured point cloud into our desired pose.
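
Roughly, our pipeline looks like the sketch below (OpenCV ≥ 4.7 ArUco API plus NumPy; the marker dictionary, marker size, and intrinsics shown are placeholder values rather than our actual calibration):

```python
import cv2
import numpy as np

# Placeholder calibration and marker setup -- replace with your own values.
MARKER_LENGTH = 0.05                            # marker side length in metres
camera_matrix = np.array([[2300.0, 0.0, 960.0],
                          [0.0, 2300.0, 600.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

def marker_pose(bgr_image):
    """Estimate the pose of the first detected ArUco marker in the camera frame."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        raise RuntimeError("no marker detected")
    # Marker corners in the marker's own frame (z = 0 plane), in the corner
    # order returned by the detector (top-left, top-right, bottom-right, bottom-left).
    h = MARKER_LENGTH / 2.0
    obj_pts = np.array([[-h,  h, 0.0], [ h,  h, 0.0],
                        [ h, -h, 0.0], [-h, -h, 0.0]])
    _, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                 camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3)

def to_marker_frame(points_xyz, R, t):
    """Express camera-frame point cloud coordinates (N x 3, metres) in the marker frame."""
    return (points_xyz - t) @ R          # same as applying R.T @ (p - t) per point
```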

The code worked well on our old 3D camera, but on the Mech-Eye we consistently observe a displacement between the transformed point cloud and the desired pose. The rotation is correct, but the translation is not. I would like to ask whether the RGB camera and the point cloud scanner share the same coordinate system (origin, directions of the axes, …).

Best regards,
Yinglei

Hi Yinglei,
1. First, it can be confirmed that the RGB images and the point clouds share the same coordinate system.
2. You can get a first idea of where the issue lies by checking how well the texture and the point cloud match (a quick way to visualize this is sketched after the link below).
3. You should also review your code again against the camera API samples:
https://docs.mech-mind.net/en/eye-3d-camera/latest/api/samples.html
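
For example, one way to check the overlay is to export the textured point cloud (for instance as a PLY file) and draw your estimated marker pose on top of it with Open3D; the file name and pose values below are only placeholders:

```python
import numpy as np
import open3d as o3d

# Placeholder file and pose -- use the textured cloud you exported and the
# marker pose (R, t) estimated from the RGB image.
pcd = o3d.io.read_point_cloud("textured_cloud.ply")

T = np.eye(4)
T[:3, :3] = np.eye(3)                   # estimated rotation
T[:3, 3] = [0.0, 0.0, 0.5]              # estimated translation in metres

# Draw a coordinate frame at the estimated marker pose on top of the cloud;
# a visible offset between the frame and the marker in the texture points to
# a translation problem rather than a coordinate-system mismatch.
frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.1)
frame.transform(T)
o3d.visualization.draw_geometries([pcd, frame])
```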

Hi Liuqing,

thank you for your reply!

We found that there is a scaling factor between the point cloud and the actual object size. For example, an edge that is 0.182 m long on the object measures only 0.109 m in the point cloud (we used MeshLab to measure the distance). When we rescaled the point cloud by a factor of around 1.65, the translation error disappeared and the pose estimation worked well. Is there any explanation for this scaling factor, or could we be doing something wrong during scanning?
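
For reference, this is roughly how we measured and corrected the scale (the picked points and file name below are placeholders; only the 0.182 m and 0.109 m lengths are our actual measurements):

```python
import numpy as np
import open3d as o3d

# Two points picked in MeshLab on an edge of the object (placeholder coordinates).
p0 = np.array([0.012, -0.034, 0.760])
p1 = np.array([0.099,  0.0315, 0.755])
length_in_cloud = np.linalg.norm(p1 - p0)     # ~0.109 m in our case
real_length = 0.182                           # metres, measured on the physical object

scale = real_length / length_in_cloud         # ~1.67 here
print(f"scale factor: {scale:.2f}")

# Rescaling the cloud about the camera origin is what removed our translation error.
pcd = o3d.io.read_point_cloud("cloud.ply")
pcd.scale(scale, center=np.zeros(3))
```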

Best regards,
Yinglei

Hi Yinglei,
I apologize for the delay in replying. Regarding your question, I have summarized three points:

1. Point cloud measurements do not involve any scaling factor; distances in the point cloud represent the actual distances on the object.
2. I suggest you check the camera's intrinsic parameters (a simple reprojection check is sketched below).
3. If your camera does turn out to have an issue with its intrinsic parameters, please refer to the following guidance.
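
As a rough way to sanity-check the intrinsic parameters, you can project a few points from the point cloud back into the RGB image and compare with where the same features actually appear; the intrinsics, 3D points, and pixel coordinates below are placeholders:

```python
import cv2
import numpy as np

# Placeholder intrinsics -- use the values reported for your camera.
camera_matrix = np.array([[2300.0, 0.0, 960.0],
                          [0.0, 2300.0, 600.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# A few 3D points taken from the point cloud (camera frame, metres) together with
# the pixel coordinates where the same features are seen in the RGB image
# (placeholder values for illustration only).
cloud_points = np.array([[0.050, 0.020, 0.800],
                         [-0.030, 0.060, 0.820]])
observed_pixels = np.array([[1103.0, 657.0],
                            [874.0, 772.0]])

# Project the cloud points with the stated intrinsics. Since the cloud and the
# RGB image share one coordinate system, rvec and tvec are zero; large residuals
# would point to an intrinsics problem.
projected, _ = cv2.projectPoints(cloud_points, np.zeros(3), np.zeros(3),
                                 camera_matrix, dist_coeffs)
print(projected.reshape(-1, 2) - observed_pixels)
```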

Best regards