- Software version: Mech-Eye Viewer 2.1.0
- Camera model: NANO
After using the Correct intrinsic parameters function in Mech-Eye Viewer to correct the intrinsic parameters, I tried to read the parameters through the SDK. However, the values read before and after using the function were identical.
Also, before using the function, the intrinsic parameters showed a significant deviation (highlighted in red), but after using it the display appeared normal.
I also tried restarting the computer after using the function, but the values read through the SDK still matched the initial parameters.
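For reference, here is roughly how such a before/after comparison can be checked: dump the intrinsics to a file before the correction, run the function, dump them again, and diff the two files. This is a minimal sketch, not Mech-Eye API code; GetIntrinsicsAsText is a placeholder for whatever read call your SDK version provides for textureCameraIntri, depthCameraIntri, and textureToDepth.

```csharp
using System;
using System.IO;

class IntrinsicsDiff
{
    // Placeholder: connect to the camera and format textureCameraIntri,
    // depthCameraIntri and textureToDepth as text. Replace the body with
    // the read call your Mech-Eye API version provides.
    static string GetIntrinsicsAsText()
    {
        throw new NotImplementedException("Fill in with your SDK call.");
    }

    // Usage: IntrinsicsDiff before.txt            (run before the correction)
    //        IntrinsicsDiff after.txt before.txt  (run after; compares the two)
    static void Main(string[] args)
    {
        string current = GetIntrinsicsAsText();
        File.WriteAllText(args[0], current);
        if (args.Length > 1)
        {
            string previous = File.ReadAllText(args[1]);
            Console.WriteLine(current == previous
                ? "Intrinsics unchanged."
                : "Intrinsics differ.");
        }
    }
}
```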
Hello. You mentioned that after using the Correct intrinsic parameters function in Mech-Eye Viewer, the intrinsic parameter matrices obtained through Mech-Eye API remain unchanged.
This is expected: the Correct intrinsic parameters function corrects the positional relationships between modules inside the camera, such as between the projector and the 2D camera, or between the left and right 2D cameras. The corrected data is then used internally by the camera to generate the depth map.
Therefore, for monocular cameras such as PRO and NANO, the output intrinsic parameter matrices remain unchanged before and after using the Correct intrinsic parameters function, and this does not affect their normal usage.
Does this mean that the intrinsic parameter data output by any camera will not change, and that only the positional relationship data between the internal modules changes? Can that data be obtained through the API?
The NANO is a monocular 3D camera. For this model, the Correct intrinsic parameters function adjusts the relationship between the projector and the 2D camera and does not alter the intrinsic parameter matrices.
Therefore, the intrinsic parameter matrices of the NANO remain unchanged after using the Correct intrinsic parameters function, and using the function does not affect your ability to obtain the intrinsic parameter matrices from Mech-Eye API.
Hello all. I see the same behaviour with a UHP-140. Before recalibrating the intrinsics, I had saved the Cam1, Cam2, and CamMerged intrinsics to files using the C# SDK. Then, using the Mech-Eye Viewer Intrinsic Parameter Tool for CamMerge (UHP capture mode 2), the check returned a too-high mean feature point positional error (197.74 µm), so I corrected the intrinsic parameters using around 40 images. Now the check results are OK.
However, when I read the intrinsic values again through the C# SDK, I still get all the same values (textureCameraIntri, depthCameraIntri, textureToDepth). How can this be? If these values do not change but the calibration is updated, this should mean that the returned images and depth map are now rectified differently, so that the same old intrinsic values are valid for the newly rectified images and point cloud. Am I correct?
Can you help me? I am on a customer site and fear that I have successfully calibrated my sensor but left the work unfinished or unsaved. Best regards.
Hello, don't worry. The correction parameters you applied have been written into the camera's internal settings. Although the values you read (textureCameraIntri, depthCameraIntri, textureToDepth) have not changed, the relative position between the camera and the projector has been corrected, and those parameters can be used as usual.
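To illustrate why the unchanged values remain usable: textureCameraIntri describes the standard pinhole projection applied to the camera's (already corrected) output. A minimal sketch of that projection, not Mech-Eye API code; fx, fy, cx, cy stand for the values inside textureCameraIntri:

```csharp
using System;

static class Pinhole
{
    // Standard pinhole projection: a camera-frame point (x, y, z) maps to
    // pixel (u, v) via u = fx * x / z + cx and v = fy * y / z + cy.
    // Because the correction is applied inside the camera before the depth
    // map is generated, this mapping (fx, fy, cx, cy) does not need to change.
    public static (double u, double v) Project(
        double x, double y, double z,                // point in the camera frame, z > 0
        double fx, double fy, double cx, double cy)  // values from textureCameraIntri
    {
        if (z <= 0) throw new ArgumentOutOfRangeException(nameof(z));
        return (fx * x / z + cx, fy * y / z + cy);
    }
}
```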
To summarize the recent video call:
The adjustments made with the Mech-Eye Viewer Intrinsic Parameter Tool for CamMerge modify the internal positional relationship between the cameras. These modifications are not reflected in the camera intrinsic parameters obtained through Mech-Eye API, so it is normal that no change is observable in the intrinsic parameters read before and after the correction.
The specific steps are to operate in camera1 mode: the intrinsic parameters obtained in this mode are used for the extrinsic (hand-eye) calibration between the camera and the robot, and the result of that calibration is then combined with the textureToDepth matrix obtained in fusion mode for further computation, as sketched below.
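For illustration, combining the two amounts to composing 4x4 homogeneous transforms. A minimal sketch under assumed frame conventions; the names robotToTexture and robotToDepth are illustrative, and the multiplication order depends on how your transforms are defined:

```csharp
static class Transforms
{
    // Multiply two 4x4 homogeneous transforms: result = a * b.
    public static double[,] Multiply4x4(double[,] a, double[,] b)
    {
        var r = new double[4, 4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i, j] += a[i, k] * b[k, j];
        return r;
    }
}

// Usage (illustrative; names and order depend on your frame conventions):
//   double[,] robotToTexture = ...;  // hand-eye calibration result (camera1 mode)
//   double[,] textureToDepth = ...;  // obtained in fusion mode
//   double[,] robotToDepth   = Transforms.Multiply4x4(robotToTexture, textureToDepth);
```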