Collection of AI-translated Q&A posts (2023/11/04–2023/11/10)

This post compiles the latest questions and answers from the Q&A category, translated by AI. The questions were posted between November 4, 2023, and November 10, 2023.
Please note: AI translation has its limitations, so please judge specific content against the actual context.

Reference Frame Acquisition

1. Q (2023/11/04)


The first image is from the official website. How can I achieve the same result as shown in the first image and obtain the coordinates of all the points shown in the second image? There are 50 points in total, so I want to obtain 50 coordinates.

1. A (2023/11/06)

Hello, in the official website example, the entire point cloud is used, and then the spraying path is calculated based on the on-site requirements, rather than directly using a specific point from the point cloud as the path pose.

For more information, please contact our technical support.

BRTIRUS3511A Robot Configuration with One-Four Axis Offset Parameters

2. Q (2023/11/04)

  • Software Version: Mech-Viz 1.7.4
  • Robot Model: BORUNTE BRTIRUS3511A

When applying the one-four axis offset, adding the Y-offset parameter doesn’t correct the robot’s posture.

Even with the addition of the seventh DH parameter, the issue still can’t be resolved.

Furthermore, there is no information on this part in the [robot]_algo.json file’s parameter explanation. Can this be added?

2. A (2023/11/04)

Hello, this robot has a special configuration rather than a standard six-axis configuration. Therefore, a robot model should be created according to this special configuration.

PRO M Camera Internal Imaging Calculation

3. Q1 (2023/11/06)

Hello,

  1. When the PRO M camera captures 2D images and depth maps, what internal calculations are involved?
  2. Mech-Eye Viewer provides 3D processing settings, including Surface Smoothing, Outlier Removal, and Noise Removal. In what sequence are these three steps executed, and are there any other steps besides these?
  3. After the 2D image is captured, are there any image processing steps? If so, in what sequence are they performed?

3. A1 (2023/11/07)

Hello,

  1. For 2D image capture with the SDK, once the camera captures a 2D image, it undergoes lens distortion correction using the camera’s internal parameters.
    As for the depth map, a sequence of structured-light-encoded images is captured, and the corresponding spatial positions are then calculated from the relationships between corresponding points and the calibrated camera parameters.
    You can refer to the “Sequential encoding” chapter in 3D optical measurement technology: Triangulation (passive binocular measurement and structured light) for more information.

  2. Currently, there are Surface Smoothing, Noise Removal, Outlier Removal, and Stripe Contrast Threshold settings. The Guru level includes Edge Preservation and Minimum Fringe Intensity Threshold.
    You can adjust these settings to handle specific point cloud scenarios or to preserve more details.
    For detailed explanations, please refer to the Mech-Eye Viewer software’s functionality overview.

  3. The three mentioned steps can be enabled or disabled independently, used individually, or combined according to specific requirements. There is no fixed sequence of execution.

  4. After capturing 2D images with the SDK, lens distortion correction is applied to the images using the camera’s internal parameters.
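
For readers who want to replicate this distortion-correction step on their own, below is a minimal sketch using OpenCV. The intrinsic matrix, distortion coefficients, and file names are placeholder values for illustration only, not the camera’s actual parameters or internal implementation.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice, obtain them from the camera via the SDK.
camera_matrix = np.array([[2700.0,    0.0, 960.0],
                          [   0.0, 2700.0, 600.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.array([0.05, -0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

raw = cv2.imread("raw_2d_image.png")                   # hypothetical captured 2D image
undistorted = cv2.undistort(raw, camera_matrix, dist_coeffs)
cv2.imwrite("undistorted_2d_image.png", undistorted)
```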

3. Q2 (2023/11/07)

Thank you for your response,

My understanding is that the depth map is a single-channel 1920x1200 image that only stores Z-depth information.

If the processing of the depth map in the camera includes handling point cloud data, where is the X and Y information of the point cloud stored?

3. A2 (2023/11/08)

Hello, what you’re referring to here is the method of converting a depth map into point clouds. You can refer to: Depth map to point cloud conversion sample (C++) (with diagrams and formulas).
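
For reference, this conversion is typically the standard pinhole back-projection: with intrinsics fx, fy, cx, cy, a pixel (u, v) with depth Z maps to X = (u − cx) · Z / fx and Y = (v − cy) · Z / fy. Below is a minimal NumPy sketch of this technique (variable names are illustrative; it is not the linked sample itself):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a single-channel depth map into an N x 3 point cloud
    using the pinhole model; invalid pixels (0 or NaN) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack((x, y, z)).reshape(-1, 3)
    valid = np.isfinite(points[:, 2]) & (points[:, 2] > 0)
    return points[valid]
```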

3. Q3 (2023/11/08)

I’d like to know:

  1. Does the depth map only store information about the Z depth?
  2. When are the X and Y data of the point cloud generated?
  3. Do the depth map pixels in Mech-Eye Viewer contain 3D X and Y information? Is the point cloud generated inside the camera after the depth map is captured, or is it generated on the local PC?

3. A3 (2023/11/08)

  1. Yes, the depth map only stores depth information.

  2. For questions 2 and 3: the depth map is generated on the camera side and transmitted to the PC. The pixel coordinates of the depth map are then converted into the point cloud’s X and Y values using the intrinsic parameters (see the link above for details). The point cloud data is calculated after the fringe image data has been collected.

3. Q4 (2023/11/08)

Understood.

So, in Mech-Eye Viewer, when are the parameters for point cloud processing applied?

3. A4 (2023/11/08)

After the raw point cloud is obtained, it undergoes post-processing according to the point cloud processing parameters.

3. Q5 (2023/11/08)

Can I understand that the imaging steps for the depth map only include projection + fringe decoding + transmission?

|  | Camera-side calculation | Local PC calculation |
| --- | --- | --- |
| 2D image | Capture + Image correction + Transmission | — |
| Depth map | Projection + Fringe decoding + Transmission | Generate raw point cloud + Point cloud processing (using Mech-Eye Viewer point cloud processing parameters) |

3. A5 (2023/11/08)

Yes, all these operations are performed on the camera side, and the results are finally transmitted to the PC.

Two Robots Collaboratively Picking the Same Workpiece

4. Q (2023/11/07)

If the workpiece to be picked is very long, for example around 5 meters, it must be picked by two robots simultaneously. Vision calibration is performed between one of the robots and the vision system; two picking points are identified and mapped, and the two picking points with their labels are sent to the PLC. The PLC then sends these labeled points to the two robots separately. However, a positional transformation between the two robots is also needed.
In this situation, how is collaborative operation typically achieved, and are there any special requirements for the involved robots?

4. A (2023/11/08)

  1. There are two methods:

    1. Establish a common world reference frame between the two robots. Obtain the poses in this world reference frame, add labels, and then send them to the PLC (see the transformation sketch after this list). If the robots support simulating two robots working together, collision detection between the two robots can be performed. Even so, it is generally advisable to minimize the risk of collision between the robots.
    2. Individually calibrate the camera’s extrinsic parameters to each of the two robots. Use two separate programs, one for capturing images and the other for processing the images, to call their respective extrinsic parameters. Calculate the corresponding poses separately and send them to the PLC.
  2. Usually, major robot brands support collaboration between two robots, but some robots may not (such as Jaka). Although the PLC can send coordinates to both robots simultaneously, it cannot perform collision detection between the robots.
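
As an illustration of method 1, once both robot base frames have been calibrated into a shared world frame, a pick pose computed in robot A’s base frame can be re-expressed in robot B’s base frame by ordinary homogeneous-transform composition. The transforms below are placeholders, not values from any specific calibration:

```python
import numpy as np

# Placeholder 4x4 homogeneous transforms obtained from calibration:
T_world_baseA = np.eye(4)   # robot A base frame expressed in the shared world frame
T_world_baseB = np.eye(4)   # robot B base frame expressed in the shared world frame
T_baseA_pick = np.eye(4)    # pick pose output by the vision system in robot A's base frame

# Pick pose in the shared world frame
T_world_pick = T_world_baseA @ T_baseA_pick

# The same pick pose expressed in robot B's base frame
T_baseB_pick = np.linalg.inv(T_world_baseB) @ T_world_pick
```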

Mech-Vision Point Cloud Normal Requirements

5. Q (2023/11/08)

Hello,

To give an example: the Steps “Down-Sample Point Cloud” and “Point Cloud Clustering” may not necessarily require normal information to perform their processing.

However, due to the point cloud format requirements of these Steps, unnecessary time must be spent calculating normals.

I would like to ask: why is normal information mandatory for point cloud Steps in Mech-Vision that may not necessarily need it?

5. A (2023/11/09)

Generally, for recognition and most point cloud processing tasks, normal information is indispensable (both downsampling and clustering algorithms often rely on normal information).

Therefore, to minimize changes to the interface, the point cloud type was set to PointNormal when the Steps were designed.
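
For readers reproducing a similar pipeline outside Mech-Vision, the usual pattern is to estimate normals once and keep them attached to the points so that later normal-aware steps can reuse them. Below is a rough sketch with Open3D (the file name and parameter values are placeholders; this is not Mech-Vision’s internal implementation):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")   # placeholder input file

# Estimate normals once, up front; downstream steps can then reuse them.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))

# Voxel downsampling keeps the attached normals (averaged per voxel),
# so the result is still a "point + normal" cloud, analogous to PointNormal.
down = pcd.voxel_down_sample(voxel_size=3.0)
print(down.has_normals())  # True
```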

Depth Map Retrieval for Specified Pixel Depth Information

6. Q1 (2023/11/09)

Could you please provide information on how Mech-Mind encodes or processes depth images? How can I convert the depth values obtained from the image below into real depth values?

6. A1 (2023/11/09)

Hello,

You can use Mech-Eye API to obtain and process depth maps. For instructions on running Mech-Eye API Python routines on Windows, please refer to: Python (Windows).

For the data types obtained through the Mech-Eye API, please see: Image data format in Mech-Eye API.

If you choose not to use Mech-Eye API, it is recommended to save the depth map in TIFF format using Mech-Eye Viewer. The saved result will be consistent with Mech-Eye API’s output, and you can refer to the link above for an introduction to Mech-Eye API’s output data format.

If Mech-Vision is your only option for saving, you can refer to this Q&A: How to convert a depth map saved in Mech-Vision’s 32-bit PNG image format with a grid into the TIFF format image as saved by Mech-Eye Viewer.

6. Q2 (2023/11/09)

I’m sorry, but I couldn’t resolve this issue.

I captured and saved the depth map using Mech-Eye Viewer. Now I need to write Python code to read the saved depth map and obtain the depth value at a specified pixel position.

6. A2 (2023/11/09)

It is recommended to directly use Mech-Eye API, which includes a complete process from obtaining a depth map to generating a point cloud (there is a corresponding Python routine). You can find an example here: Depth map to point cloud conversion sample (C++) (with diagrams and formulas).

Refer to the latter part of the document for methods to read the depth value at a pixel point. Specific implementation requires following the instructions in the document: Python (Windows).

6. Q3 (2023/11/10)

I referred to this code but couldn’t determine the storage format of the depth map. I want to obtain the depth value at a specified pixel without invoking the API.

6. A3 (2023/11/10)

Mech-Eye Viewer saves depth maps in TIFF format, which is a 32-bit floating-point, single-channel format that stores only depth data. For more details, refer to this link: Image data format in Mech-Eye API.

Additionally, this method does not require any additional format conversion. You can try the example below:
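
The example referenced above was not reproduced in this compilation; below is a minimal sketch of what it would look like, assuming the TIFF saved by Mech-Eye Viewer (32-bit float, single channel, depth values typically in millimeters) and OpenCV for reading. The file name and pixel coordinates are placeholders:

```python
import cv2

# Read the 32-bit floating-point, single-channel TIFF saved by Mech-Eye Viewer.
# IMREAD_UNCHANGED preserves the original bit depth instead of converting to 8-bit.
depth = cv2.imread("depth_map.tiff", cv2.IMREAD_UNCHANGED)

u, v = 960, 600                 # example pixel (column u, row v)
depth_value = depth[v, u]       # depth value; 0 or NaN usually marks invalid pixels
print(f"Depth at ({u}, {v}): {depth_value}")
```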

6. Q4 (2023/11/10)

Thank you; I can now read the depth information of pixel points from the TIFF file.

If I save it in PNG format, how can I read the depth information of pixel points from a PNG depth image?

6. A4 (2023/11/10)

Reading depth information from PNG-formatted depth maps requires data conversion: How to convert a depth map saved in Mech-Vision’s 32-bit PNG image format with a grid into the TIFF format image as saved by Mech-Eye Viewer.

It is advisable to use the TIFF format: the PNG format is specific to Mech-Vision, and TIFF is the commonly used format for storing depth maps.

Mech-Viz: Replicating Processes Between Different Projects

7. Q1 (2023/11/09)

Software Version: 1.7.4

Question: How can certain processes from one Mech-Viz project be replicated to another?

7. A1 (2023/11/09)

  1. Place all desired process components into the same Procedure.

  2. Right-click on the Procedure and select “Export Selected Procedure to File”, as shown in the image below.

  3. Open another project, right-click in an empty space, and choose “Import Procedure From File”, as shown in the image below.

7. Q2 (2023/11/10)

Thank you for the information. The method works, but I noticed that after importing fixed movement positions from Project A to Project B, the data changes. What could be the reason for this?

Current software version: 1.7.4

7. A2 (2023/11/10)

Apologies for the inconvenience. Indeed, certain step parameters cannot be retained.

This issue will be addressed in the upcoming 1.8 version and will be resolved once that version is released.