Hello,
I have a Mech-Eye depth camera. My project is to locate boxes on a pallet at different heights. Using the API and OpenCV, I am trying to overlay the depth map on the 2D image, but I am running into some problems.
My first question is whether I have to calibrate the cameras in order to align the two images. I assume this is not necessary for the depth map, but I am unsure about the 2D camera.
I have also tried the function ‘CameraIntrinsics()’, which returns the translation and rotation between the two images, but I have not obtained a usable result.
Could you help me with my problem?
Best regards and thanks
Hello,
That’s almost what I want to do, but I don’t want a point cloud; I want to keep only the parts of the 2D image that lie at a certain distance from the camera.
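A minimal sketch of that kind of depth-based filtering, assuming the depth map and the 2D image are already pixel-aligned (same resolution, same viewpoint) and using synthetic arrays in place of the camera output; the array shapes and the 0.8–1.2 m band are illustrative assumptions, not values from the SDK:

```python
import numpy as np

# Hypothetical stand-ins for a captured 2D image (H x W x 3, uint8)
# and an aligned depth map (H x W, metres).
h, w = 4, 5
image = np.full((h, w, 3), 200, dtype=np.uint8)
depth = np.linspace(0.5, 2.0, h * w).reshape(h, w)

# Keep only pixels whose depth falls inside the band of interest,
# e.g. the top layer of boxes between 0.8 m and 1.2 m from the camera.
near, far = 0.8, 1.2
mask = (depth >= near) & (depth <= far)

masked = image.copy()
masked[~mask] = 0  # black out everything outside the depth band
```

The same boolean mask can also be passed to OpenCV functions (e.g. as the `mask` argument of `cv2.findContours` input preparation) once converted to `uint8`.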
Perhaps you could try using the CaptureStereo2DImages sample directly to obtain the 2D images used for generating the depth maps; that way, they correspond to the depth maps pixel for pixel.
You can also transfer point cloud data directly from the TexturedPointCloud format to an Open3D point cloud without saving a .ply file.
This direct transfer improves efficiency considerably, because the point cloud never has to be written to and reloaded from disk.