Collection of AI-Translated Q&A Posts (2023/09/09–2023/09/15)

This post compiles the latest questions and answers from the Q&A category, translated by AI. The questions were posted between September 9, 2023, and September 15, 2023.
Please note: AI translation has its limitations, so please interpret the content in light of the actual context.

Mech-DLK Software Training Iteration Error

1. Q(2023/09/08)

Model training proceeded smoothly for the first few runs, but an error occurred during the third one. The error message is as follows:

	Traceback (most recent call last):
	  File "DetTrain.py", line 6, in <module>
	    train(config, '…/configs', '/model_zoo')
	  File "", line 68, in train
	  File "", line 112, in single_gpu_training
	  File "", line 67, in mmind_train_detector
	  File "", line 39, in run
	  File "C:\Program Files\Python36\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 50, in train
	    self.run_iter(data_batch, train_mode=True, **kwargs)
	  File "C:\Program Files\Python36\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 30, in run_iter
	    **kwargs)
	  File "C:\Program Files\Python36\lib\site-packages\mmcv\parallel\data_parallel.py", line 67, in train_step
	    return self.module.train_step(*inputs[0], **kwargs[0])
	  File "C:\Program Files\Python36\lib\site-packages\mmdet\models\detectors\base.py", line 247, in train_step
	    losses = self(**data)
	  File "C:\Program Files\Python36\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
	    return forward_call(*input, **kwargs)
	  File "C:\Program Files\Python36\lib\site-packages\mmcv\runner\fp16_utils.py", line 98, in new_func
	    return old_func(*args, **kwargs)
	  File "C:\Program Files\Python36\lib\site-packages\mmdet\models\detectors\base.py", line 181, in forward
	    return self.forward_train(img, img_metas, **kwargs)
	  File "", line 121, in forward_train
	  File "", line 24, in forward_train
	  File "", line 31, in _mask_forward_train
	  File "", line 70, in mask_point_forward_train
	  File "", line 13, in get_targets
	RuntimeError: CUDA error: an illegal memory access was encountered
	CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
	For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Currently, we haven’t been able to identify the root cause. Please help us analyze this issue!

1. A(2023/09/09)

This appears to be a problem related to the graphics card and its memory (VRAM).

We recommend upgrading both the training machine’s specifications and the Mech-DLK version. During local testing, we successfully trained the model using Mech-DLK version 2.4.2 on a machine equipped with an NVIDIA GeForce RTX 2060 graphics card (6GB VRAM).
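As a general debugging aid for this class of CUDA error (standard PyTorch practice, not a Mech-DLK-specific feature), you can force synchronous kernel launches so that the stack trace points at the call that actually failed. A minimal sketch, assuming training is launched from a Python entry point such as the DetTrain.py shown in the traceback:

	import os

	# Must be set before CUDA is initialized (i.e., before importing torch),
	# so kernel launches run synchronously and the error surfaces at the
	# exact call that caused it instead of a later API call.
	os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

	import torch  # imported after setting the variable on purpose

	print("CUDA available:", torch.cuda.is_available())
	# ...then invoke the training entry point in this same process.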


For DEEP V4 or LSR, When Using an External Camera to Capture Color Images, How Can the Step Alarm About Inconsistent Image Sizes Be Resolved?

2. Q(2023/09/11)

For DEEP V4 or LSR, when using an external camera to capture color images, how can the alarm indicating inconsistent image sizes be resolved?

2. A(2023/09/11)

For LSR series and DEEP V4 cameras, when selecting “External Color Image” for the camera’s 2D image, please take note of the following:

  1. The issue may occur because the Rectify to Depth Map option is not selected in the Step “Capture Images from Camera” while your project uses the depth map without this correction. Enabling Rectify to Depth Map can resolve this problem.

  2. When Rectify to Depth Map is enabled in the Step “Capture Images from Camera”, any missing parts in the depth map will also appear as missing parts in the camera’s output color image, as shown in the image below:

  3. Whether to enable Rectify to Depth Map depends on the following scenarios:

    • In cases where the point cloud is well-defined and you need to use the depth map to obtain the corresponding color image, it is recommended to enable this option (though in this case, you can also first obtain the highest-layer point cloud and then project it to a 2D image without enabling Rectify to Depth Map, as shown below).

    • For some highly reflective workpieces, where color information is still needed in regions with missing depth data, it is recommended not to enable this option. Enabling it may lead to missing color information, which could affect the performance of deep learning recognition.


Converting 2D Calculation Results into 3D Pose

3.1 Q(2023/09/09)

How can I combine the results of 2D segmentation to obtain 3D pick points?

For example:

  1. Process 2D image to obtain segmentation regions.
  2. Process the segmented regions: calculate the center of each segmented region and use its long axis as the orientation.
  3. Obtain the 3D pose XYZ: map the 2D center pixel to the point cloud to obtain the XYZ position.
  4. Obtain the 3D pose RPY: [How can the 2D orientation be converted into the 3D pose’s RPY?]
  5. Combine and transform 3D poses.

3.1 A(2023/09/09)

You can combine them into 3D pick points using the following methods:
Method 1: Use the Step “Compose New Poses by Combining Parts of Input Poses”.

Method 2: Use the Step “Decompose Poses to Quaternions and Translations”, then use the Step “Compose Poses from Quaternions and Translation Vectors”.
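For intuition, here is a minimal numpy sketch (an illustration of the underlying math, not Mech-Vision’s internal implementation) of composing a 3D pose from the XYZ position looked up in the point cloud and the 2D in-plane angle, assuming the objects lie in a plane perpendicular to the camera’s Z axis:

	import math
	import numpy as np

	def pose_from_2d(center_xyz, angle_rad):
		# The 2D orientation becomes a rotation about the Z axis;
		# the quaternion is in (w, x, y, z) order.
		half = angle_rad / 2.0
		quat = np.array([math.cos(half), 0.0, 0.0, math.sin(half)])
		return np.asarray(center_xyz, dtype=float), quat

	xyz, q = pose_from_2d([0.55, -0.72, 0.17], math.radians(30.0))
	print("translation:", xyz, "quaternion (wxyz):", q)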

3.2 Q(2023/09/11)

Thank you for the reply. I have a few more questions after reviewing your response.

  1. “Convert 2D Poses to 3D Poses” is not found in Mech-Vision or the documentation. Is this a version issue?
  2. Both of the solutions you provided involve the Step “Calc Poses and Dimensions from Planar Point Clouds”. Are there alternative steps? For instance, directly converting 2D pixel points into 3D point clouds.

3.2 A(2023/09/12)

  1. You can find the Step “Convert 2D Poses to 3D Poses” in the Step Library by enabling developer mode in the settings (a restart is required after enabling it). Please note that you should avoid enabling developer mode in a production environment, as it may lead to issues such as multiple instances of Mech-Vision being opened, causing errors like the project not being registered.


  2. Currently, there is no direct Step for converting 2D pixel points into 3D point clouds.
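As general background (the standard pinhole camera model, not a Mech-Vision Step), a 2D pixel with a known depth can be back-projected to a 3D point in the camera frame using the camera intrinsics. A minimal sketch with hypothetical intrinsic values fx, fy, cx, cy:

	import numpy as np

	def backproject_pixel(u, v, depth, fx, fy, cx, cy):
		# Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy
		x = (u - cx) * depth / fx
		y = (v - cy) * depth / fy
		return np.array([x, y, depth])

	# Hypothetical intrinsics; real values come from the camera calibration.
	p = backproject_pixel(700, 400, 1200.0, fx=2000.0, fy=2000.0, cx=640.0, cy=360.0)
	print(p)  # -> [36., 24., 1200.]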


Map to Multiple Pick Points

4. Q(2023/09/11)

After the Step “Map to Multiple Pick Points”, why are there two pose coordinates? What could be the issue?

Figure 1, Pose coordinates of the pick points:

Figure 2, Two pose coordinates appear after mapping:

4. A(2023/09/12)

Hello, the issue might be that, when using the template matching and pick point editor, the geometric center point was also set as a pick point, resulting in two pick points in the pick_points file (you can deselect it by right-clicking the corresponding pose).


Coarse Matching V2 Matching Errors

5. Q(2023/09/12)

Mech-Vision V1.7.2

We are testing algorithms using a virtual camera, and the image paths are already specified. The following two images are of the same type of wheel hub. Some match successfully, while others do not.

For those that do not match successfully, how can we identify the reasons?

Successful Match:

Match Failure:

5. A(2023/09/12)

  1. For the point cloud in the images, it is recommended to use the workpiece’s edge point cloud template for coarse matching to ensure its accuracy.

  2. For fine matching, if the 3D features of the workpiece captured by the camera are not distinct, it is recommended to use the edge point cloud for matching. If the captured 3D features are distinct, you can also use the full point cloud template for higher accuracy.


On a computer with a GPU, the dlkpack model file in the deep learning management tool only displays the CPU option for hardware type, not the GPU option.

6. Q(2023/09/13)

Issue:
On computer A, in the deep learning management tool, the dlkpack model file only displays the CPU option for hardware type and does not show the GPU option, even though this computer has a dedicated graphics card.

However, the same dlkpack, on computer B, shows both CPU and GPU options.

Software Version: 1.7.4

6. A(2023/09/13)

In Mech-Vision 1.7.4, the deep learning model package manager detects your hardware model to determine which options to display.

Display conditions are as follows:

  • CPU: Displayed when an Intel-brand CPU is detected.
  • GPU(x2): Displayed when an NVIDIA dedicated graphics card is detected, and the graphics card driver version is higher than 472.5.

Therefore, the phenomenon on computer A is due to a low graphics card driver version. Upgrading the driver should resolve the issue.
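To verify which case a machine falls into, the installed NVIDIA driver version can be queried with nvidia-smi, for example via this small sketch (a generic check, not part of Mech-Vision):

	import subprocess

	# Requires nvidia-smi (installed with the NVIDIA driver) on the PATH.
	out = subprocess.run(
		["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
		capture_output=True, text=True, check=True,
	)
	version = out.stdout.strip().splitlines()[0]
	print("Driver version:", version)
	print("GPU option expected:", float(version) > 472.5)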


Standard Interface for Receiving Points

7. Q(2023/09/12)

I’ve been working on a robot program recently and need to process the received waypoint data. The waypoints returned by the Standard Interface have a fixed number of decimal places but not a fixed number of integer digits, which means the robot has to parse the data field by field.

Is it possible to fix the number of integer digits so that each value has a fixed total number of characters, padding shorter values with leading zeros? This would allow us to extract complete waypoints directly when processing the data, making the robot program easier to write.


For example, take the following waypoints:

205, 2100, 1, 1, 1, 550.2728, -724.0622, 174.9187…

If we add leading zeros to the values that are not long enough, so that the integer part, the decimal point, and the decimal part together are always 9 characters:

205, 2100, 1, 1, 1, 00550.272, -0724.062, 00174.918…
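For reference, the fixed-width scheme described above corresponds to zero-padded formatting with a total width of 9 characters and 3 decimal places; in Python notation, for example (note that standard formatting rounds the last decimal rather than truncating it):

	for value in (550.2728, -724.0622, 174.9187):
		# Width 9 counts the sign, integer digits, decimal point, and 3 decimals.
		print(f"{value:09.3f}")
	# Output:
	# 00550.273
	# -0724.062
	# 00174.919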

7. A(2023/09/13)

Hello,
The received result is a string. You can refer to the code below, which parses it by splitting at commas; parse the entire string, then process the parsed vision waypoint data.

	LOCAL VAR string comma_str = ""
	LOCAL VAR string comma_previous_str = ""
	LOCAL VAR string comma_after_str = ""

	// Split comma_str at its first comma: the part before the comma goes
	// into comma_previous_str, the remainder into comma_after_str.
	LOCAL PROC SplitComma()
		comma_previous_str = ""
		comma_after_str = ""
		int str_len = StrLen(comma_str)
		IF(str_len <= 0)
			RETURN
		ENDIF
		int comma_pos = StrFind(comma_str,1,",")
		IF(comma_pos>str_len)
			// No comma found: the whole string is the part before the comma
			comma_previous_str = comma_str
		ELSE
			comma_previous_str = StrPart(comma_str,1,comma_pos-1)
			IF(comma_pos<str_len)
				comma_after_str = StrPart(comma_str,comma_pos+1,str_len)
			ENDIF
		ENDIF
		comma_str = ""
	ENDPROC

Here is an example program that parses the response to command 205:

	GLOBAL PROC MM_Get_VizData()
		Last_Data  = 0
		Pos_Num = 0
		VisPos_Num = 0
		Print("MM: Get Mech-Viz Data")
		IF(Jps_Pos != 1 && Jps_Pos != 2)
			ERRNO = ROBOT_ARGUMENT_ERROR
			GOTO ERROR_Get_VizData
		ENDIF

		Init_Data()
		ByteSend = IntToStr(CMD_GetViz_Data) + ","
		ByteSend = ByteSend + IntToStr(Jps_Pos)
		SocketSendStringLine(ByteSend,socket0)
		ByteRecv = SocketReadString(MM_Timeout,socket0)
	
		// split MM_CMD
		comma_str = ByteRecv
		SplitComma()
		MM_CMD = StrToDouble(comma_previous_str)
		ByteRecv = comma_after_str
		// split MM_Status
		comma_str = ByteRecv
		SplitComma()
		MM_Status = StrToDouble(comma_previous_str)
		ByteRecv = comma_after_str
		IF(MM_CMD != CMD_GetViz_Data)
			ERRNO = ROBOT_CMD_ERROR
			GOTO ERROR_Get_VisData
		ENDIF
		IF(MM_Status != 2100)
			Check_Status()
			RETURN
		ENDIF
		// split Last_Data
		comma_str = ByteRecv
		SplitComma()
		Last_Data = StrToDouble(comma_previous_str)
		ByteRecv = comma_after_str
		// split Pos_Num
		comma_str = ByteRecv
		SplitComma()
		Pos_Num = StrToDouble(comma_previous_str)
		ByteRecv = comma_after_str
		// split VisPos_Num
		comma_str = ByteRecv
		SplitComma()
		VisPos_Num = StrToDouble(comma_previous_str)
		ByteRecv = comma_after_str
	
		// split MM_Pose Struct 
		// Tip: If the line below throws an error, please:
		// - Open Mech-Center -> Deployment Settings -> Mech-Interface -> Advanced Settings.
		// - Reduce "Max num. of poses sent each time".
		string str_array[Pos_Num*8] = StrSplit(ByteRecv,",")
	
		int counter = 0
		int offset = 0
		while(counter < Pos_Num)
			offset = counter * 8
			counter += 1
			IF(Jps_Pos == 1)
				MM_Pose_Joint[counter].J1 = StrToDouble(str_array[offset + 1])
				MM_Pose_Joint[counter].J2 = StrToDouble(str_array[offset + 2])
				MM_Pose_Joint[counter].J3 = StrToDouble(str_array[offset + 3])
				MM_Pose_Joint[counter].J4 = StrToDouble(str_array[offset + 4])
				MM_Pose_Joint[counter].J5 = StrToDouble(str_array[offset + 5])
				MM_Pose_Joint[counter].J6 = StrToDouble(str_array[offset + 6])
			ELSE
				MM_Pose_Pos[counter].X = StrToDouble(str_array[offset + 1])
				MM_Pose_Pos[counter].Y = StrToDouble(str_array[offset + 2])
				MM_Pose_Pos[counter].Z = StrToDouble(str_array[offset + 3])
				MM_Pose_Pos[counter].A = StrToDouble(str_array[offset + 4])
				MM_Pose_Pos[counter].B = StrToDouble(str_array[offset + 5])
				MM_Pose_Pos[counter].C = StrToDouble(str_array[offset + 6])
			ENDIF
			MM_Pose_Label[counter] = StrToDouble(str_array[offset + 7])
			MM_Pose_Speed[counter] = StrToDouble(str_array[offset + 8])
		endwhile
	ENDPROC

How to Accurately Adjust the Position of External Scene Objects in Mech-Viz?

8. Q(2023/09/13)

After adding external scene objects, how can I accurately adjust their positions within the external scene? How far are these objects from the robot’s coordinate origin? Are there any measurement tools available?

8. A(2023/09/13)

Hello, thank you for your feedback; we have received it.

We apologize for the inconvenience: the software currently does not provide built-in measurement tools. However, this feature is already in our development pipeline and is expected to be available in upcoming versions.


Depth Map Information

9. Q(2023/09/14)

What is the meaning of the individual bytes of data in the 32-bit depth map? Is there a more detailed explanation of the file’s data storage format?

9. A(2023/09/14)

Hello, you can check the content in the following link to see if it can address your question:
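As general background (a common convention for depth maps, stated here as an assumption rather than the documented Mech-Vision format): each pixel of a 32-bit depth map is usually one IEEE 754 little-endian float32, often in millimeters, so every 4 consecutive bytes decode to one depth value. A minimal sketch:

	import struct
	import numpy as np

	# Four raw bytes of one pixel, interpreted as a little-endian float32.
	raw = b"\x00\x00\x96\x44"
	(depth_mm,) = struct.unpack("<f", raw)
	print(depth_mm)  # -> 1200.0

	# Decoding a whole buffer at once (here a pretend 2x2 image):
	buf = raw * 4
	depth = np.frombuffer(buf, dtype="<f4").reshape(2, 2)
	print(depth)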


When Mech-Vision is minimized to the taskbar, there is an occasional increase in the runtime of the Step “3D Coarse Matching V2”. How can this be resolved?

10. Q(2023/09/14)

  • Software Version: 1.7.4
  • Relevant Issue: Longer processing time for the same data and Steps.
  • Computer Specifications: 12th Gen Intel(R) Core™ i7-12700
  • Operating System Version: Windows 10

Attempted Solutions:

  • Enabled High-Performance Power Mode - Ineffective.
  • Closed Other Software - Ineffective.

10. A(2023/09/14)

Solution:
To achieve faster processing, it is recommended that the customer upgrade to Windows 11 and set the computer’s power mode to High Performance. In most cases, however, the difference in runtime is small enough to be disregarded.

Issue Analysis:
The customer’s computer uses a 12th Gen Intel heterogeneous CPU, which combines performance cores and efficiency cores (commonly known as big and small cores).

Because of this heterogeneous design, Intel officially states that Windows 11 is required for proper core scheduling; Windows 10 only maintains compatibility.


Combining Poses

11. Q(2023/09/15)

Two poses are input.
Taking the first pose as the reference point and treating the line from the first pose to the second as the X or Y direction, how should they be combined to create a new pose pointing toward the second pose? :smiley:

11. A(2023/09/15)

Hello, based on your description, you can achieve this by aligning the X-axis of the first pose toward the second pose. The Step “Point Poses to Reference Positions” can be used for this.
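For intuition, here is a minimal numpy sketch (an illustration of the underlying math, not the Step’s implementation) that builds a rotation whose X axis points from the first pose’s position toward the second, using an assumed world-Z up vector to fix the remaining degree of freedom:

	import numpy as np

	def point_x_axis_at(p1, p2, up=(0.0, 0.0, 1.0)):
		# X axis: unit vector from p1 toward p2.
		x = np.asarray(p2, float) - np.asarray(p1, float)
		x /= np.linalg.norm(x)
		# Y axis: perpendicular to `up` and X (up must not be parallel to X).
		y = np.cross(np.asarray(up, float), x)
		y /= np.linalg.norm(y)
		# Z axis completes the right-handed frame.
		z = np.cross(x, y)
		return np.column_stack([x, y, z])

	R = point_x_axis_at([0, 0, 0], [1, 1, 0])
	print(R[:, 0])  # X axis points along the p1 -> p2 direction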


How to Adjust the Pose of the Tool

12. Q(2023/09/15)

  1. A tool has been added to the model, but its direction is reversed. How can I adjust the tool’s position and angle?
    Modifying the “Tool Configuration” by rotating around X, Y, or Z doesn’t seem to make any difference. What should I do?

  2. After dragging the tool, the robot’s pose has changed. How can I get the robot back to its initial pose?

12. A(2023/09/15)

Hello,
For the first question: To adjust the pose of the tool, you can double-click on the OBJ model in the bottom left corner and then make modifications in the dialog box.

For the second question: You can adjust the robot’s pose in the dialog box on the right side.