Convert Mech-Eye intrinsic parameters for use in Halcon

1. Obtaining the intrinsic parameters of Mech-Eye Industrial 3D Cameras

The intrinsic parameters of Mech-Eye Industrial 3D Cameras can be acquired through the ‘getDeviceIntri’ interface of the Mech-Eye API. Refer to the following sample for usage instructions:

GitHub Mech-Eye API GetCameraIntri
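For reference, below is a minimal C++ sketch of that workflow, modeled on the GetCameraIntri sample and assuming the 2.0-era Mech-Eye API (MechEyeDevice, enumerateMechEyeDeviceList, getDeviceIntri); class and method names may differ in later SDK versions, so treat the linked sample as authoritative.

#include <iostream>
#include <vector>

#include "MechEyeApi.h"

int main()
{
    // Enumerate cameras on the network and connect to the first one found.
    // (The real sample uses interactive helper functions and checks the
    // returned status; error handling is simplified here.)
    std::vector<mmind::api::MechEyeDeviceInfo> infos =
        mmind::api::MechEyeDevice::enumerateMechEyeDeviceList();
    if (infos.empty()) {
        std::cerr << "No Mech-Eye device found." << std::endl;
        return -1;
    }

    mmind::api::MechEyeDevice device;
    device.connect(infos[0]); // status checking omitted for brevity
    std::cout << "Connect Mech-Eye Successfully." << std::endl;

    // Read the texture (2D) and depth intrinsics; dumping this struct
    // produces the output shown in section 2.
    mmind::api::DeviceIntri intri;
    device.getDeviceIntri(intri);

    device.disconnect();
    std::cout << "Disconnected from the Mech-Eye device successfully." << std::endl;
    return 0;
}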

2. Printed intrinsic parameters

Connect Mech-Eye Successfully.
CameraDistCoeffs: k1: 0.0, k2: 0.0, p1: 0.0, p2: 0.0, k3: 0.0
DepthDistCoeffs: k1: 0.0, k2: 0.0, p1: 0.0, p2: 0.0, k3: 0.0
name: CameraMatrix
[2552.997597580249, 2552.997597580249
1587.164794921875, 1569.4437408447266]
name: DepthMatrix
[1276.4987987901245, 1276.4987987901245
793.3323974609375, 784.4718704223633]
Disconnected from the Mech-Eye device successfully.

3. Converting the intrinsic parameters of Mech-Eye Industrial 3D Cameras to the Halcon format

Halcon provides the “cam_mat_to_cam_par” operator for this conversion, which turns the DepthMatrix values into the CameraParam tuple required by Halcon. Its signature is as follows (the first parameter is named CameraMatrix in Halcon; you pass the DepthMatrix values into it):

cam_mat_to_cam_par( : : CameraMatrix, Kappa, ImageWidth, ImageHeight : CameraParam)

For detailed usage instructions, please refer to: cam_mat_to_cam_par (Operator).

The CameraMatrix parameter of “cam_mat_to_cam_par” expects the following row-major 3×3 layout:

[ fx 0 cx
  0 fy cy
  0  0  1]

In the Mech-Eye SDK, however, it is printed as follows:

  1. Mech-Eye SDK version 2.0.2 or later:
[ fx 0 cx
  0 fy cy
  0  0  1]
  2. Mech-Eye SDK version 2.0.0:
[ fx, fy
  cx, cy]

By inputting the acquired DepthMatrix values (paying attention to the element order described above) and the actual image dimensions, you can obtain the Halcon-formatted CameraParam:

cam_mat_to_cam_par ([1276.4987987901245, 0.0, 793.3323974609375, 0.0, 1276.4987987901245, 784.4718704223633, 0.0, 0.0, 1.0], 0, 1280, 1024, CameraParam)
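If you drive Halcon from C++ rather than HDevelop, the same conversion can be sketched as follows, assuming HALCON’s C++ interface (HalconCpp), in which each HDevelop operator has an auto-generated wrapper such as CamMatToCamPar; the fx/fy/cx/cy values are the DepthMatrix entries printed in section 2.

#include "halconcpp/HalconCpp.h"

using namespace HalconCpp;

int main()
{
    // DepthMatrix values from section 2. Note the SDK print order:
    // version 2.0.0 prints [fx, fy; cx, cy], while 2.0.2 or later
    // prints the full 3x3 matrix directly.
    const double fx = 1276.4987987901245, fy = 1276.4987987901245;
    const double cx = 793.3323974609375, cy = 784.4718704223633;

    // Row-major 3x3 camera matrix expected by cam_mat_to_cam_par.
    const double values[9] = {fx, 0.0, cx, 0.0, fy, cy, 0.0, 0.0, 1.0};
    HTuple cameraMatrix(values, 9);

    // Kappa = 0 because Mech-Eye images are already undistorted
    // (see the notes below).
    HTuple cameraParam;
    CamMatToCamPar(cameraMatrix, 0.0, 1280, 1024, &cameraParam);
    return 0;
}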

Important notes

  1. The intrinsic parameters of the 2D image and the depth map may differ on certain camera models; make sure the intrinsic parameters you input match the image type you are working with.
  2. To maintain data type consistency, pass floating-point values in the camera matrix.
  3. All images output by Mech-Eye API have already undergone distortion correction, so the distortion coefficients are all set to 0.

Thank you for the post.

How does it work for the UHP-140 when you use the UHP capture mode “merge”?

In that case, the distortion coefficients are not zero.

Below are some values obtained when changing the UHP capture mode.

For Capture Mode == 0 (meaning “Camera1”)

{
  "textureCameraIntri": {
    "k1": 0.0,
    "k2": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "k3": 0.0,
    "fx": 4747.108985849909,
    "fy": 4747.11567589157,
    "cx": 1009.5485500489403,
    "cy": 757.6421028579348
  },
  "depthCameraIntri": {
    "k1": 0.0,
    "k2": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "k3": 0.0,
    "fx": 4747.108985849909,
    "fy": 4747.11567589157,
    "cx": 1009.5485500489403,
    "cy": 757.6421028579348
  },
  "textureToDepth": {
    "r1": 1.0,
    "r2": 0.0,
    "r3": 0.0,
    "r4": 0.0,
    "r5": 1.0,
    "r6": 0.0,
    "r7": 0.0,
    "r8": 0.0,
    "r9": 1.0,
    "x": 0.0,
    "y": 0.0,
    "z": 0.0
  }
}

For Capture Mode == 1 (meaning “Camera2”)

{
  "textureCameraIntri": {
    "k1": 0.0,
    "k2": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "k3": 0.0,
    "fx": 4750.00837788406,
    "fy": 4750.247888513961,
    "cx": 994.2353845822166,
    "cy": 755.7524129808173
  },
  "depthCameraIntri": {
    "k1": 0.0,
    "k2": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "k3": 0.0,
    "fx": 4750.00837788406,
    "fy": 4750.247888513961,
    "cx": 994.2353845822166,
    "cy": 755.7524129808173
  },
  "textureToDepth": {
    "r1": 1.0,
    "r2": 0.0,
    "r3": 0.0,
    "r4": 0.0,
    "r5": 1.0,
    "r6": 0.0,
    "r7": 0.0,
    "r8": 0.0,
    "r9": 1.0,
    "x": 0.0,
    "y": 0.0,
    "z": 0.0
  }
}

For Capture Mode == 2 (meaning “Merge”)

{
  "textureCameraIntri": {
    "k1": 0.0,
    "k2": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "k3": 0.0,
    "fx": 4747.108985849909,
    "fy": 4747.11567589157,
    "cx": 1009.5485500489403,
    "cy": 757.6421028579348
  },
  "depthCameraIntri": {
    "k1": -0.0529987939931696,
    "k2": -0.11534423411217257,
    "p1": -6.61582570809633E-05,
    "p2": 0.0007696073542978535,
    "k3": 0.0,
    "fx": 3595.4090687615135,
    "fy": 3596.5103935870816,
    "cx": 1001.2064970672418,
    "cy": 573.4680082060822
  },
  "textureToDepth": {
    "r1": 0.969719902794587,
    "r2": -0.0018571571922590098,
    "r3": -0.2442127373648228,
    "r4": 0.001096328425810457,
    "r5": 0.9999941134603708,
    "r6": -0.0032513241286656726,
    "r7": 0.2442173380168563,
    "r8": 0.002885136352085333,
    "r9": 0.9697162305541711,
    "x": 83.41996193655886,
    "y": 0.09613590912086897,
    "z": 21.671747919143296
  }
}

Thanks in advance.

Also, can you explain what textureToDepth is (when it is not zero)? It does not look like the transform from texture to depth when the UHP capture mode is set to “merge”, because the 2D contours of the monochrome image appear aligned with the depth image. It looks more like the relation between Camera1 and “merge”. But I may be wrong, especially because I see a small shift (roughly 0.15 mm) between the point clouds when I use this transform to move the point cloud from “merge” to Camera1.
Thanks in advance.
boris

We apologize for the late reply. It is currently the National Day holiday in China. I am contacting my colleague to look into your question. Thank you for your patience.

Thank you for your feedback. The question you raised is about how to interpret the intrinsic parameters obtained in the Camera1, Camera2, and Merge modes.

Firstly, Camera1 and Camera2 can each be considered a monocular camera, so the values of textureCameraIntri and depthCameraIntri are the same in these modes, and textureToDepth is the identity transform (no rotation or translation).

The Merge mode, on the other hand, can be considered a stereo camera, so its parameters are derived from the two cameras, Camera1 and Camera2. The main camera is Camera1, and the textureToDepth parameter represents the relative pose between the two cameras.
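To make that concrete, here is a small self-contained C++ sketch of how such a rigid transform would be applied to a 3D point, under the assumption (suggested by the JSON above but not confirmed in this thread) that r1..r9 form a row-major 3×3 rotation matrix and x, y, z a translation in millimeters:

#include <array>
#include <cstdio>

struct RigidTransform {
    std::array<double, 9> r; // r1..r9, row-major rotation matrix
    std::array<double, 3> t; // x, y, z translation (assumed mm)
};

// p' = R * p + t
std::array<double, 3> apply(const RigidTransform& m, const std::array<double, 3>& p)
{
    return {m.r[0] * p[0] + m.r[1] * p[1] + m.r[2] * p[2] + m.t[0],
            m.r[3] * p[0] + m.r[4] * p[1] + m.r[5] * p[2] + m.t[1],
            m.r[6] * p[0] + m.r[7] * p[1] + m.r[8] * p[2] + m.t[2]};
}

int main()
{
    // textureToDepth values from the "Merge" printout above.
    const RigidTransform textureToDepth{
        {0.969719902794587, -0.0018571571922590098, -0.2442127373648228,
         0.001096328425810457, 0.9999941134603708, -0.0032513241286656726,
         0.2442173380168563, 0.002885136352085333, 0.9697162305541711},
        {83.41996193655886, 0.09613590912086897, 21.671747919143296}};

    const std::array<double, 3> point{0.0, 0.0, 1000.0}; // a point 1 m along Z
    const auto mapped = apply(textureToDepth, point);
    std::printf("%.3f %.3f %.3f\n", mapped[0], mapped[1], mapped[2]);
    return 0;
}

The exact direction of the transform (texture frame to depth frame, or the reverse) should be verified against the SDK documentation for your version.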

It is important to note that the 2D images output by the cameras are already undistorted, so there is no need to apply the distortion parameters to them.