Hi,
When I detect an object in the RGB image, I need to locate the point in the depth image that corresponds to the same physical point on the object. Is there a straightforward way to achieve this with the Mech-Mind Python API 2.4, ideally by obtaining the x, y coordinates in the depth image that match a given pixel in the RGB image?
I have attempted to write a custom function for this, but the results are imprecise. To troubleshoot, I have a few additional questions:
- Is it normal for the principal point to be offset by more than 20 pixels from the image center, or should I verify the intrinsic calibration? These cameras should be intrinsically calibrated at the factory, correct?
- Are the images processed onboard in any way (e.g., scaled, cropped, or shifted) that would prevent a straightforward overlay using the intrinsic and extrinsic parameters? Any such transformation applied by the camera itself would break a direct pixel-to-pixel mapping.
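For context, here is a minimal sketch of the mapping I am attempting, assuming a standard pinhole model with no distortion. The matrices `K_depth`, `K_rgb`, `R`, and `t` are placeholders for the values returned by the camera's intrinsic/extrinsic query (not actual Mech-Mind API calls). Note that this goes depth → RGB, since the depth value is needed for the back-projection; to go the other way, I project the full depth map into the RGB frame and look up the nearest projected pixel.

```python
import numpy as np

def depth_pixel_to_rgb_pixel(u, v, depth_m, K_depth, K_rgb, R, t):
    """Map a depth-image pixel (u, v) with depth value depth_m (metres)
    to the corresponding sub-pixel position in the RGB image.

    K_depth, K_rgb : 3x3 intrinsic matrices (placeholder values below)
    R (3x3), t (3,): transform from the depth camera frame to the RGB
                     camera frame (extrinsics between the two sensors)
    """
    # Back-project the depth pixel to a 3-D point in the depth camera frame.
    p_depth = depth_m * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # Transform the point into the RGB camera frame.
    p_rgb = R @ p_depth + t
    # Project onto the RGB image plane and dehomogenize.
    uvw = K_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Placeholder intrinsics/extrinsics for illustration only.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R_identity = np.eye(3)
t_zero = np.zeros(3)

# Sanity check: with identical intrinsics and an identity extrinsic
# transform, a pixel should map onto itself.
u_rgb, v_rgb = depth_pixel_to_rgb_pixel(320, 240, 1.0, K, K, R_identity, t_zero)
```

If this is the correct approach, my remaining suspicion is that lens distortion or onboard image processing (per my questions above) is what throws the mapping off.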
Any insights or advice would be greatly appreciated!