Collection of AI-Translated Latest Q&A Posts (2023/09/23–2023/09/27)

This post compiles the latest questions and answers from the Q&A category translated by AI. The questions were posted between September 23, 2023, and September 27, 2023.
Please note: AI translation has its limitations; please judge the specifics against the actual context.

Laser L Camera Projection Unit Displays a Temperature of -11°C and Outputs No Depth Map (Actual On-Site Temperature: 26.7°C)

1. Q (2023/09/24)

  • Software Version: Mech-Eye Viewer 2.1.0
  • Camera Model and Firmware Version: Laser L WAM3021CA3000767
  • Issue Description: Mech-Eye Viewer displays the message "Temperature of laser -11 is out of safe range (-10 ~ 55). Camera will decrease laser power for safety issue!"


1. A (2023/09/25)

Hello, the issue you’ve reported is related to a camera temperature warning.

We have contacted you to arrange for it to be sent in for factory repair.

How to Use “Classify” in Mech-Viz

2. Q (2023/09/25)

Desired Functionality: Based on the labels sent by Mech-Vision, Mech-Viz should follow different planning paths.

Following the settings in the screenshots above results in an error. What is the correct way to configure this?

2. A (2023/09/25)

Regarding “Classify”, you can refer to:

From the screenshots you provided, there do not appear to be any errors in the software at the moment.
You can confirm whether the parameter in the image below is selected:

Significant Pick Point Deviation

3. Q (2023/09/27)

Figure 1

Figure 2

Figure 3

When the workpiece is at the same angle as the taught model, the camera captures the image and the robot grasps very accurately, as shown in Figure 1.

However, when there is a significant difference in angle between the workpiece and the model, the robot’s grasping pose becomes inaccurate and deviates significantly, as shown in Figures 2 and 3.

What are the possible reasons for this phenomenon?

3. A (2023/09/27)

Hello, based on the situation illustrated above, it is highly likely that this issue is related to the robot's precision. We recommend checking the robot's precision, especially its rotational accuracy.

In addition, this situation may also be related to robot calibration, so we also recommend checking the relevant calibration settings.

How to Use the Step “Detect Edges” in Mech-Vision?

4. Q (2023/09/26)

Within Mech-Vision, there is a Step for edge detection.

The two detection types, CannyDetector and EdgeDrawing detector, lack explanatory information.

There is also no detailed information available here:

Could you please clarify whether it is still recommended to use this Step?

  1. If it is still recommended, how should it be utilized?
  2. If it is no longer recommended, are there alternative Steps?

4. A (2023/09/26)

Currently, there is only one Step for 2D edge detection, and it is recommended for use.

Both algorithms are well-established. For detailed explanations, you can refer to the content in the provided link:

  1. If the lighting conditions are stable, we recommend using the Canny algorithm.
  2. However, if there is significant variation in lighting, it is advisable to use the EdgeDrawing algorithm.
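Mech-Vision wraps these algorithms internally, but the idea behind Canny's double thresholding can be sketched in plain NumPy. This is a simplified illustration, not Mech-Vision's or OpenCV's implementation; EdgeDrawing instead traces edges through anchor points, which is what makes it more robust to lighting variation.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (Canny's first stage)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

def double_threshold(mag, low, high):
    """Canny-style hysteresis: keep strong edges plus weak pixels
    adjacent to a strong one (one pass; wrap-around at the image
    border is ignored for this illustration)."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    keep = strong.copy()
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            shifted = np.roll(np.roll(strong, di, axis=0), dj, axis=1)
            keep |= weak & shifted
    return keep
```

With a stable, high-contrast scene a single strong threshold already suffices, which is why Canny works well under stable lighting; under varying lighting the fixed thresholds become unreliable, matching the recommendation above.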

Mech-Vision Naming Issue

5. Q (2023/09/24)

Is the format "xxx-xxxxx-xxxx" not allowed for naming within Mech-Vision? Is there a byte-length limitation?
Currently, when this naming format is used in Mech-Vision, Mech-Viz cannot receive the vision results. Changing the name to xx-xxxxxx resolves the issue.
(Here, naming refers to project naming.)

5. A (2023/09/26)

Currently, there are no character length restrictions imposed on project naming within the software. It’s only necessary to ensure that the absolute path corresponding to the project content stays within the maximum character limit imposed by Windows.

Regarding the issue you’ve described, I wonder if you are consistently faced with this problem? If so, please provide us with some relevant information, such as Mech-Vision, Mech-Viz, and Mech-Center logs, screenshots when the issue occurs, or the project file that can reliably reproduce the problem.
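The Windows limit referred to above is the classic MAX_PATH of 260 characters (unless long-path support is enabled). A minimal sketch of the check described, with a hypothetical project path for illustration:

```python
import os

# Windows' classic MAX_PATH limit; the absolute path of the project
# folder (and everything inside it) must stay under this length
# unless long-path support is enabled on the system.
MAX_PATH = 260

def project_path_ok(project_dir: str) -> bool:
    """Return True if the absolute project path fits within MAX_PATH.

    `project_dir` is an example argument, not a Mech-Vision API.
    """
    return len(os.path.abspath(project_dir)) < MAX_PATH
```

A project named with a long "xxx-xxxxx-xxxx" pattern nested deep in the file system can exceed the limit even though the name itself is short enough.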

My material is a rectangular aluminum block, and I want to distinguish between two of its faces: the length × width face and the length × height face. How should I set up the project?

6. Q1 (2023/09/27)

I want to classify using point cloud dimensions, but the Step asks me to input a reference dimension. How should I supply data to this input port?

6. A1 (2023/09/27)

Hello, you can supply the dimensions using the "Read Object Dimensions" Step.

6. Q2 (2023/09/27)

The shape of the aluminum block is (Length: 211mm, Width: 40mm, Height: 30mm).

The size settings for reading the material are:

The parameter settings for classifying based on point cloud size are:

The result of “Calc Poses and Dimensions from Planar Point Clouds” is displayed as:

Hello, even after setting it up this way, I still cannot differentiate the two point clouds based on size. Why is that?

6. A2 (2023/09/27)

  1. We recommend changing Length on Z-axis in the Read Object Dimensions Step to 1 mm; as you can see, the object height calculated by the Calc Poses and Dimensions from Planar Point Clouds Step is 0.001 m.
  2. Increase the Length Difference Threshold and Height Difference Threshold in the Classify Point Clouds by Dimensions Step so that classification depends only on the Width Difference Threshold.

From the data, the size difference is around 10 mm. If you find it difficult to adjust the parameters for Classify Point Clouds by Dimensions, you can differentiate using the following method:

First, extract the point cloud using deep learning masks, calculate the pose and size of the planar point cloud:

Then, decompose the size and extract the Y value. Compare it with the threshold you set, as shown in the image:

Based on this value, you can distinguish the sizes and filter the point cloud extracted by the mask to obtain two different point clouds. The overall workflow is as follows:

Note that one of the two "Filter" Steps should have its negate option enabled:
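The branching logic described above (decompose the measured size, take the Y value, compare against a threshold) can be sketched as follows. The threshold value and names here are illustrative, not Mech-Vision parameters; dimensions are in metres, matching the 0.040 m width and 0.030 m height of the block.

```python
# Pick a threshold between the two candidate widths:
# the length x width face measures 0.040 m on Y,
# the length x height face measures 0.030 m on Y.
WIDTH_THRESHOLD_M = 0.035

def is_length_width_face(dims_m):
    """dims_m = (x, y, z) in metres, as output by the planar
    point-cloud step; compare the Y component to the threshold."""
    return dims_m[1] > WIDTH_THRESHOLD_M

# The two faces of the 211 x 40 x 30 mm block, with Z forced to 1 mm
# as recommended above.
faces = {
    "LxW": (0.211, 0.040, 0.001),
    "LxH": (0.211, 0.030, 0.001),
}
results = {name: is_length_width_face(d) for name, d in faces.items()}
```

Each point cloud is then routed to one of the two "Filter" Steps depending on this boolean, which is why one filter must be negated.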

Detecting the Specific Position of Table Corners Relative to the Robot Base Coordinate System

7. Q (2023/09/27)

The camera is mounted at the robot’s end-effector, and I want to detect the specific position of the corners of this table relative to the robot’s base coordinate system. After using the “Detect Corners” step, I’m not sure how to proceed with the following steps.

7. A (2023/09/27)

Hello, you can consider the following approaches:
1. 3D Method

  1. Extract the point cloud of the upper surface. You can refer to this document: Extract Planar Point Clouds.
  2. Transform the scene point cloud to the pose of the planar point cloud, then perform orthogonal projection. After orthogonal projection, detect lines and calculate the intersection points of the two lines to obtain the corresponding corners.
  3. You can refer to the attached project:

    VIS-Detect Intersection between Two Line (6.6 KB)

2. 2D Method

  1. If you want to calculate corners directly from a 2D image, we recommend enhancing the visibility of the workpiece edges with lighting techniques, then detecting lines to find their intersection points.
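In both methods, the final corner is the intersection of two detected edge lines. A minimal 2D version of that last computation (an illustration, not the code in the attached project):

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite line through p1-p2 with the line
    through p3-p4, solved via Cramer's rule on 2D points.
    Returns None when the lines are (nearly) parallel."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    d1 = p2 - p1  # direction of line 1
    d2 = p4 - p3  # direction of line 2
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product
    if abs(denom) < 1e-12:
        return None  # parallel or coincident lines: no unique corner
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1
```

For example, the horizontal line through (0, 0) and (1, 0) meets the vertical line through (2, -1) and (2, 1) at the corner (2, 0). Applied in image (or projected point-cloud) coordinates, the result can then be transformed into the robot base frame using the hand-eye calibration.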