This post compiles the latest questions and answers from the Q&A category translated by AI. The questions were posted between November 4, 2023, and November 10, 2023.
Please note: AI has its limitations, and specific content should still be discerned based on the actual context.
The first image is from the official website. How can I achieve the same result as shown in the first image and obtain the coordinates of all the points in the point cloud, as shown in the second image? The point cloud contains 50 points in total, so I want to obtain 50 coordinates.
Hello, in the official website example, the entire point cloud is used, and then the spraying path is calculated based on the on-site requirements, rather than directly using a specific point from the point cloud as the path pose.
For more information, please contact our technical support.
- Software Version: Mech-Viz 1.7.4
- Robot Model: BORUNTE BRTIRUS3511A
When applying the one-four axis offset, adding the Y-offset parameter doesn’t correct the robot’s posture.
Even with the addition of the seventh DH parameter, the issue still can’t be resolved.
Furthermore, there is no information on this part in the [robot]_algo.json file’s parameter explanation. Can this be added?
Hello, based on the robot’s configuration, it belongs to a special configuration rather than a standard six-axis robot. Therefore, a robot model should be created according to this special configuration.
- When the PRO M camera captures 2D images and depth maps, what internal calculations are involved?
- Mech-Eye Viewer can display 3D calculations, including Surface Smoothing, Outlier Removal, and Noise Removal. What is the sequence of execution for these three steps, and are there any other steps besides these?
- After sampling the 2D image, are there any image processing steps? If so, what is the sequence for these steps?
For 2D image capture with the SDK, once the camera captures a 2D image, it undergoes lens distortion correction using the camera’s intrinsic parameters.
As for obtaining the depth map, a sequence of structured light-encoded images is captured, and then the corresponding spatial positions are calculated based on the relationships between corresponding points and the calibrated camera parameters.
You can refer to the “Sequential encoding” chapter in 3D optical measurement technology: Triangulation (passive binocular measurement and structured light) for more information.
Currently, there are Surface Smoothing, Noise Removal, Outlier Removal, and Stripe Contrast Threshold settings. The Guru level includes Edge Preservation and Minimum Fringe Intensity Threshold.
You can adjust these settings to handle specific point cloud scenarios or to preserve more details.
For detailed explanations, please refer to the Mech-Eye Viewer software’s functionality overview.
The three mentioned steps can be enabled or disabled independently, used individually, or combined according to specific requirements. There is no fixed sequence of execution.
After capturing 2D images with the SDK, lens distortion correction is applied to the images using the camera’s intrinsic parameters.
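As a generic illustration of what such a correction involves (this is not Mech-Eye’s internal code), here is a sketch of the Brown-Conrady radial distortion model; the intrinsics and distortion coefficients are made-up placeholder values:

```python
# Placeholder intrinsics and distortion coefficients -- real values come from
# the camera's factory calibration, not from this example.
fx, fy, cx, cy = 2400.0, 2400.0, 960.0, 600.0
k1, k2 = 0.10, -0.05  # radial terms of the Brown-Conrady distortion model

def distort_pixel(u, v):
    """Map an ideal (undistorted) pixel to where the lens actually images it."""
    x = (u - cx) / fx  # normalized image coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + fx * x * scale, cy + fy * y * scale

# The principal point is unaffected; pixels near the border shift the most.
print(distort_pixel(cx, cy))  # -> (960.0, 600.0)
```

Undistortion inverts this mapping, typically via iterative solving or a lookup table; libraries such as OpenCV (`cv2.undistort`) implement this directly.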
Thank you for your response.
My understanding is that the depth map is a single-channel 1920x1200 image that stores only the Z (depth) information.
If the processing of the depth map in the camera includes handling point cloud data, where is the X and Y information of the point cloud stored?
Hello, what you’re referring to here is the method of converting a depth map into point clouds. You can refer to: Depth map to point cloud conversion sample (C++) (with diagrams and formulas).
I’d like to know:
- Does the depth map only store information about the Z depth?
- When are the X and Y data of the point cloud generated?
- In Mech-Eye Viewer’s depth map pixels, do they contain 3D X and Y information? Is the point cloud generated inside the camera after capturing the depth map, or is it generated on the local PC?
Yes, the depth map only stores depth information.
For 2 and 3: the depth map is generated on the camera side and transmitted to the PC. Pixel coordinates in the depth map are then converted into the point cloud’s X and Y values using the intrinsic parameters (see the link above for details). The point cloud data is calculated after the fringe image data has been collected.
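The conversion described here can be sketched as follows; `fx`, `fy`, `cx`, `cy` are illustrative placeholder intrinsics, not real camera parameters:

```python
import numpy as np

# Placeholder intrinsics for a 1920x1200 depth map; real values are
# queried from the camera's calibration data.
fx, fy, cx, cy = 2400.0, 2400.0, 960.0, 600.0

def pixel_to_xyz(u, v, z):
    """Back-project one depth-map pixel (u, v) with depth z to camera-frame XYZ."""
    return (u - cx) * z / fx, (v - cy) * z / fy, z

def depth_to_point_cloud(depth):
    """Back-project an (H, W) depth array into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth]).reshape(-1, 3)
```

This is why storing only Z suffices: X and Y are derived on demand from the pixel grid and the intrinsic parameters.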
So, in Mech-Eye Viewer, when are the parameters for point cloud processing applied?
After obtaining the raw point cloud, it undergoes post-processing based on the parameters for point cloud processing.
Can I understand that the imaging steps for the depth map only include projection + fringe decoding + transmission?
- Capture + Image correction + Transmission
- Projection + Fringe decoding + Transmission
- Generate raw point cloud + Point cloud processing (using Mech-Eye Viewer point cloud processing parameters), or is this a local PC calculation?
Yes, all these operations are performed on the camera side, and the results are finally transmitted to the PC.
If the workpiece to be picked is very long (around 5 meters, for example), it must be picked by two robots simultaneously. Vision calibration is performed between one of the robots and the vision system; two picking points are identified and mapped, and the vision system communicates with the PLC to provide the two picking points and their labels. The PLC then sends these labels to the two robots separately. However, a positional transformation between the two robots is also required.
In this situation, how is collaborative operation typically achieved, and are there any special requirements for the involved robots?
There are two methods:
- Establish a common world reference frame between the two robots. Get the poses in this world reference frame, add labels, and then send them to the PLC. If the robots support simulation of two robots working together, you can perform collision detection between the two robots. However, it is generally advisable to minimize the risk of collisions between the robots.
- Individually calibrate the camera’s extrinsic parameters to each of the two robots. Use two separate programs, one for capturing images and the other for processing the images, to call their respective extrinsic parameters. Calculate the corresponding poses separately and send them to the PLC.
Usually, major robot brands support collaboration between two robots, but some robots may not (such as Jaka). Although the PLC can send coordinates to both robots simultaneously, it cannot perform collision detection between the robots.
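For the first method, once a common world frame (or the relative base-to-base transform) is known, re-expressing a pick point from robot A’s frame in robot B’s frame is a single rigid transform. A numpy sketch, where the 4x4 matrix `T_B_A` is an assumed value measured separately during setup:

```python
import numpy as np

# Assumed transform from robot A's base frame to robot B's base frame.
# The translation column is robot A's origin expressed in robot B's frame
# (example values, in mm).
T_B_A = np.array([
    [1.0, 0.0, 0.0, 5000.0],
    [0.0, 1.0, 0.0,    0.0],
    [0.0, 0.0, 1.0,    0.0],
    [0.0, 0.0, 0.0,    1.0],
])

def to_robot_b(p_a):
    """Express a pick position given in robot A's base frame in robot B's frame."""
    p_h = np.append(np.asarray(p_a, dtype=float), 1.0)  # homogeneous coordinates
    return (T_B_A @ p_h)[:3]
```

Full poses (position plus orientation) are handled the same way by composing 4x4 matrices instead of transforming a single point.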
To give an example: the Steps “Down-Sample Point Cloud” and “Point Cloud Clustering” may not necessarily require the use of normal information to perform downsampling.
However, due to the point cloud format requirements of these Steps, unnecessary time must be spent calculating normals.
I would like to inquire why normal information is mandatory for point cloud Steps in Mech-Vision that may not necessarily need it?
Generally, for recognition and most point cloud processing tasks, normal information is indispensable (both downsampling and clustering algorithms often rely on normal information).
Therefore, to minimize changes to the interface, when designing the Steps, the point cloud type was set as PointNormal.
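As a generic illustration of why normal computation takes time (this is not Mech-Vision’s internal implementation): each normal requires a plane fit over a local neighborhood of points, for example via PCA:

```python
import numpy as np

def estimate_normal(neighborhood):
    """Estimate a surface normal as the direction of least variance (PCA plane fit)."""
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction perpendicular to the best-fit plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

# Points lying on the z = 0 plane: the estimated normal is parallel to Z.
plane = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]]
normal = estimate_normal(plane)
```

Repeating such a fit for every point, on top of the neighbor search it requires, is what makes mandatory normals costly for Steps that would not otherwise need them.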
Could you please provide information on how Mech-Mind encodes or processes depth images? How can I convert the depth values obtained from the image below into real depth values?
You can use Mech-Eye API to obtain and process depth maps. For instructions on running Mech-Eye API Python routines on Windows, please refer to: Python (Windows).
For the data types obtained through the Mech-Eye API, please see: Image data format in Mech-Eye API.
If you choose not to use Mech-Eye API, it is recommended to save the depth map in TIFF format using Mech-Eye Viewer. The saving results will be consistent with Mech-Eye API, and you can refer to the link provided above regarding Mech-Eye API’s output data format introduction.
If Mech-Vision is your only option for saving, you can refer to this Q&A: How to convert a depth map saved in Mech-Vision’s 32-bit PNG image format with a grid into the TIFF format image as saved by Mech-Eye Viewer.
I’m sorry, but I couldn’t resolve this issue. I captured and saved the depth map using Mech-Eye Viewer. Now I need to write Python code to read the saved depth map and obtain the depth value at a specified pixel position.
It is recommended to directly use Mech-Eye API, which includes a complete process from obtaining a depth map to generating a point cloud (there is a corresponding Python routine). You can find an example here: Depth map to point cloud conversion sample (C++) (with diagrams and formulas).
Refer to the latter part of the document for methods to read the depth value at a pixel point. Specific implementation requires following the instructions in the document: Python (Windows).
I referred to this code but couldn’t determine the storage format of the depth map. I want to obtain the depth value at a specified pixel without invoking the API.
Mech-Eye Viewer saves depth maps in TIFF format, which is a 32-bit floating-point single-channel format solely preserving depth data. For more details, refer to this link: Image data format in Mech-Eye API
Additionally, this method does not require any additional format conversion. You can try the example below:
Thank you; I can now read the depth information of pixel points from the TIFF file.
If I save it in PNG format, how can I read the depth information of pixel points from a PNG depth image?
Reading depth information from PNG-formatted depth maps requires data conversion: How to convert a depth map saved in Mech-Vision’s 32-bit PNG image format with a grid into the TIFF format image as saved by Mech-Eye Viewer.
It is advisable to use the TIFF format. PNG format is specific to Mech-Vision software, and the commonly used method for storing depth maps is in TIFF format.
Software Version: 1.7.4
Question: How can certain processes from one Mech-Viz project be replicated to another?
Place all desired process components into the same Procedure.
Right-click on the Procedure and select “Export Selected Procedure to File”, as shown in the image below.
Open another project, right-click in an empty space, and choose “Import Procedure From File”, as shown in the image below.
Thank you for the information. The method works, but I noticed that after importing fixed movement positions from Project A to Project B, the data changes. What could be the reason for this?
Current software version: 1.7.4
Apologies for the inconvenience. Indeed, certain step parameters cannot be retained.
This issue will be fixed in the upcoming 1.8 release.