Collection of AI-Translated Q&A Posts (2023/09/16–2023/09/22)

This post compiles the latest questions and answers from the Q&A category, translated by AI. The questions were posted between September 16, 2023, and September 22, 2023.
Please note: AI translation has its limitations, and the specific content should still be interpreted in light of the actual context.

Discrepancy Between Theoretical TCP and Actual TCP

1. Q1 (2023/09/18)

This occurs in actual projects, especially in high-accuracy scenarios where a single object is grasped.

Because workpieces arrive in a wide variety of orientations, and because of the discrepancy between the theoretical TCP and the actual TCP, workpieces that require non-symmetric grasping from any direction within 360° often show the following problem: after the grasp point is initially taught and a label is attached, changing the workpiece’s orientation for recognition and grasping causes the grasp point to deviate significantly from the previously attached label position.

How can we solve this problem?

1. A1 (2023/09/18)

Based on the description above, the on-site TCP accuracy is insufficient, and the teach method is being used to compensate for grasping accuracy.

  1. The teach method compensates for the robot’s absolute accuracy error, TCP error, and extrinsic parameter error. It is suitable when the incoming orientation of workpieces does not vary significantly; you can try to limit the variation in the incoming orientation of workpieces.
  2. If you cannot limit the incoming orientation of workpieces, you can create different templates for workpieces arriving in different orientations and use angle-based recognition within the project to call the appropriate template.

1. Q2 (2023/09/18)

  1. On-site, we used the drag-and-drop method to create templates; the difference between the TCP and the actual gripper rotation center is likely due to gripper manufacturing errors.
  2. We certainly cannot restrict the incoming orientation of workpieces, for example when they arrive in random or mixed patterns.

1. A2 (2023/09/18)

  1. When the on-site TCP is inaccurate, the drag-and-drop method can indeed cause grasping deviations. I would recommend teaching a precise TCP on-site instead; that way you won’t need to teach multiple templates, and operation will be simpler.
  2. You can also try the second method, using different templates for different angles: first match to determine the incoming orientation, then match again and call the template for that angle. Below are some reference ideas for project setup.

For scenarios with mixed patterns, you can also categorize them by angle.
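
To make the angle-based template dispatch concrete, here is a minimal sketch in Python; the template names, bin angles, and helper function are all illustrative, not actual Mech-Vision identifiers:

```python
# Hypothetical template names keyed by incoming-orientation bin (degrees).
# In practice these would be pick-point templates taught for each orientation.
TEMPLATES = {0: "template_0deg", 90: "template_90deg",
             180: "template_180deg", 270: "template_270deg"}

def select_template(workpiece_angle_deg: float) -> str:
    """Snap the recognized incoming angle to the nearest taught bin."""
    bins = sorted(TEMPLATES)
    # Wrap-around angular distance so 355 degrees maps to the 0-degree bin.
    nearest = min(bins, key=lambda b: abs((workpiece_angle_deg - b + 180) % 360 - 180))
    return TEMPLATES[nearest]

print(select_template(95.0))   # -> template_90deg
print(select_template(355.0))  # -> template_0deg
```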

1. Q3 (2023/09/18)

Currently, I also make the determination based on the incoming angle of workpieces, but it involves pose compensation. For a single product there are eight compensations, four for the front directions and four for the rear, and there are dozens of products… Moreover, the accuracy is highest at these compensated angles, and the deviation grows as the workpiece angle moves away from them.

Even if I use your method of creating different grasp-point templates for different orientations, the workload seems significant, and it might be more complicated than compensating by angle. Moreover, it’s difficult to position accurately with the drag-and-drop method.

1. A3 (2023/09/18)

When there are many different product types on-site, it’s advisable to start with precise TCP calibration; this will make maintenance easier in the long run.

At the same time, I suggest checking the robot’s absolute accuracy and the camera’s extrinsic parameter accuracy on-site, minimizing these errors as much as possible.

Tuning Analysis for the Coarse and Fine Matching Steps

2. Q (2023/09/18)

In the Mech-Mind vision system, the coarse matching and fine matching Steps are used heavily in practical projects, appearing in the large majority of them. Yet despite being such useful features, their internal parameters make them difficult to get started with. The step-by-step instructions provide some information, but it still feels somewhat lacking: some parameters are hard to understand without visual aids, and they would be far more intuitive with videos.

What adjustments should be made to achieve more suitable parameters?

2. A (2023/09/18)

Hello. To address the complexity of adjusting the matching parameters, you can consider using the user-friendly version of the Step, which will make it easier to grasp the tuning effects.

We highly recommend using the Step “3D Workpiece Recognition”, as it is more convenient to use and its explanations are easier to understand.

In addition, you can refer to the parameter explanations in the user manual from within the software. The parameter content in the manual is currently being optimized, and more detailed parameter usage guidance will be included in version 1.8.0.

Firmware Upgrade Fails, Prompting “Firmware upgrade file not found. Please confirm whether the selected file or file path contains the updateList.json file.”

3. Q (2023/09/19)

When attempting to upgrade the camera firmware using Mech-Eye Viewer V2.1.0, the software reports that the firmware upgrade file does not exist, even though it actually does. I’ve tried reinstalling the software, but that hasn’t helped.


3. A (2023/09/19)

Hello, and thank you for your feedback.

Regarding this issue: it has been identified as a problem with that software version. It has already been fixed, and the fix will be included in the upcoming release, version 2.2.0.

A temporary workaround is to change the software’s installation path to one that contains no Chinese characters; this should resolve the problem.
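
If you want to check whether your current installation path would trigger this bug, a trivial sketch (the paths shown are only examples):

```python
def has_non_ascii(path: str) -> bool:
    """Return True if the path contains any non-ASCII (e.g., Chinese) characters."""
    return any(ord(ch) > 127 for ch in path)

print(has_non_ascii(r"C:\Program Files\Mech-Eye Viewer"))  # False: path is safe
print(has_non_ascii("C:\\软件\\Mech-Eye Viewer"))           # True: move the install
```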

How to Advance a Project Whose Plan Requires a Zero Error Rate

4. Q1 (2023/09/18)

In projects that require precise recognition, various factors may mean that the project cannot afford any recognition errors. How should such projects be managed to ensure smooth progress while remaining within a controllable scope?

In most cases, clients in the early stages of a project do not have a good understanding of 3D vision and do not implement the necessary error prevention measures. They assume that there won’t be any errors, leading to a project plan designed without thorough testing.

Examples:

In the carton depalletizing project:

  1. Due to deep learning recognition errors, one carton is recognized as two, leading to an insecure grasp; the carton falls mid-air, causing significant economic damage.
  2. The length and width of each carton are measured, and precise positioning is based on these measurements. As with the first issue, recognition errors can cause significant losses.
  3. Even if initial testing is stable, how should these issues be addressed during later production?

In the metal part loading project:

  • In cases where there is no secondary positioning in the downstream process, parts are grasped and placed directly, which usually requires high recognition accuracy. Significant recognition deviations or outright recognition errors can then damage downstream equipment.

4. A1 (2023/09/19)

Hello,

  1. Regarding abnormal deep learning recognition: we have collected a large amount of data, and the supermodel’s ability to recognize cartons is improving through iterative optimization.
    Additionally, the Step “Validate Box Dimensions” can be used to guard against errors in the recognized dimensions (a sketch of this idea follows at the end of this answer).

  2. For high-precision metal part loading projects, the first priority should be to ensure the project’s recognition accuracy. Secondly, potential unexpected situations should be considered in the project, with corresponding error prevention measures and alarm messages implemented.

For projects, it is essential to assess risks based on past experience in the early stages and develop corresponding contingency plans. When the client lacks in-depth understanding, maintain effective communication and actively raise questions to mitigate risks.
Furthermore, during project implementation, set up error prevention alarms for the various failure modes, especially for known risks, and surface them as reminders.
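
The dimension check mentioned in item 1 above essentially compares each recognized box size against the expected SKU size within a tolerance. A minimal sketch of that idea, with illustrative values rather than the Step’s actual parameters:

```python
EXPECTED_LWH = (600.0, 400.0, 300.0)  # expected carton length/width/height, mm
TOLERANCE_MM = 15.0                    # illustrative threshold, not a real default

def dims_valid(recognized_lwh, expected_lwh=EXPECTED_LWH, tol=TOLERANCE_MM):
    """Reject a recognition whose dimensions deviate too far from the expected SKU.

    A carton wrongly split in two would typically fail this check, because one
    of its recognized edges comes out at roughly half the expected length.
    """
    return all(abs(r - e) <= tol for r, e in zip(recognized_lwh, expected_lwh))

print(dims_valid((598.0, 402.0, 301.0)))  # True: within tolerance
print(dims_valid((300.0, 400.0, 300.0)))  # False: looks like half a carton
```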

4. Q2 (2023/09/19)

Cartons usually have unknown dimensions or come in various size types. In cases like this, the known Steps cannot implement error prevention effectively; thresholds, for example, cannot completely address the segmentation anomalies mentioned above, which still result in incorrect grasps or incorrect dimensions.

In reality, aside from implementing error prevention measures at the initial project design stage, there seems to be no other solution to this root issue.

4. A2 (2023/09/19)

For situations where the carton dimensions are unknown, if the entire stack consists of the same SKU, you can consider adjusting the parameters: use the “UnknownBoxDimension” option.

With this approach, when the recognized dimensions differ significantly between recognitions, the software will actively filter the results or raise an alarm for safety. While this may not solve the problem entirely, it adds a certain level of safety.

It can address scenarios where: “In a single photo, multiple cartons are recognized with inconsistent dimensions that exceed the threshold.”

However, it cannot currently address scenarios where: “In a single photo, all cartons are recognized incorrectly, but their dimensions are consistent,” such as “having only one carton but it is mistakenly split into two.”
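
What this safeguard can and cannot catch follows directly from the consistency check it performs. A minimal sketch of the logic, with an illustrative threshold rather than the actual parameter:

```python
DIM_SPREAD_THRESHOLD_MM = 20.0  # illustrative consistency threshold

def consistent_within_shot(recognized_dims):
    """Flag a shot whose recognized cartons disagree with each other in size.

    recognized_dims: list of (length, width, height) tuples from one photo.
    Catches "one carton among many recognized with the wrong size", but not
    "every carton recognized wrongly in the same consistent way".
    """
    for axis in range(3):
        values = [d[axis] for d in recognized_dims]
        if max(values) - min(values) > DIM_SPREAD_THRESHOLD_MM:
            return False  # filter these results or raise an alarm
    return True

print(consistent_within_shot([(600, 400, 300), (598, 401, 300)]))  # True
print(consistent_within_shot([(600, 400, 300), (300, 400, 300)]))  # False
```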

Synchronization of Mech-Viz Motion and Communication

5. Q1 (2023/09/19)

Software Version: Mech-Mind Software Suite-1.7.4

I’ve created a Mech-Viz project, and I need to use a notification to send a message to the Adapter after the robot reaches the pick point. In actual testing, I observed that the external communication starts sending messages as soon as the robot begins moving towards the pick point. Is there a way to make the pick-point part block, so that the external communication executes only after the robot reaches its destination?

5. A1 (2023/09/19)

Normally, the motion and communication in Mech-Viz are synchronized. Could you please provide the project and logs for reference?

Additionally, have you selected the “Precise Arrival” option in the pick-point step’s properties? You can also try adding a 0.2 s wait step after the pick-point step for testing.

5. Q2 (2023/09/20)

Hello, perhaps my description of “synchronization” wasn’t accurate. What I meant was that I would like the Pick Point step to block and only execute the External Communication after the robot reaches its destination. I’ve already uploaded the project.
pick_place.zip (7.4 MB)

“Precise Arrival” appears to be an attribute of the Visual Recognition step. In my experiments I didn’t use Visual Recognition; I only had motion steps and notification steps.

5. A2 (2023/09/20)

I need to confirm one thing: are you using master-control mode or non-master-control mode?

  1. In master-control mode: I’ve tested your project, and synchronization works correctly.
    If you find that your actual robot is not synchronized with the software, the likely cause is this: when the actual robot’s joint angles are very close to the target position, the robot notifies Mech-Viz that it has arrived. This can cause Mech-Viz to start executing subsequent operations before the robot has fully reached the target position, and it typically occurs when the robot is moving at a slow speed.
    Solutions:
    • a. Adjust the robot’s speed so that the robot reaches the target position before subsequent operations are executed.
    • b. Add a wait time before sending the notification message.
  2. In non-master-control mode, Mech-Viz completes its simulation run before the actual robot starts moving, so it is normal for messages to be sent in advance.

5. Q3 (2023/09/20)

I am currently using master-control mode. I’ve checked the notification step settings and noticed that some of them did not have “Robot must stop previously” selected. I have now enabled that option, and the situation seems to have improved. I will try your suggestions and provide updates if there are any further issues.

Can the Adapter Block Mech-Viz’s Notify Step and Provide a Response?

6. Q (2023/09/20)

Software Version: Mech-Mind Software Suite-1.7.4

I would like the Adapter, upon receiving the message once a notification is sent out, to block Mech-Viz’s operation and then provide a response to the notification. Is it possible to fulfill this requirement?

6. A (2023/09/21)

Hello.

You can achieve this by using Mech-Viz’s “Notify” and “Branch by Msg” features:

  1. Communicate with the host system through the Adapter using these two features. When a gripper action is needed, Mech-Viz sends a notification to the host system via the Adapter. The host system completes the gripper action and returns a completion signal. The Adapter then sets the message-branch exit to pass the signal to Mech-Viz, allowing the robot to proceed with the next action (sketched after this list).

  2. For information on programming the adapter, please refer to: Adapter Programming Guide. If you encounter technical issues while writing the adapter, you can contact Mech-Mind technical support for assistance.
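
For orientation, here is a rough sketch of the round trip described in item 1. Every class and method name below is a hypothetical placeholder, not the real Adapter API; consult the Adapter Programming Guide for the actual interfaces:

```python
# Minimal sketch of the Notify -> host system -> Branch by Msg round trip.
# All identifiers here are hypothetical placeholders, NOT the real Adapter API.

class GripperAdapter:
    """Skeleton of an adapter that blocks on a gripper action."""

    def on_viz_notify(self, message: str) -> None:
        """Called when Mech-Viz's 'Notify' step sends a message."""
        if message == "close_gripper":
            self.send_to_host("close_gripper")           # forward to host system
            reply = self.wait_for_host_reply(timeout=5)  # block until host replies
            if reply == "gripper_closed":
                # Select the 'Branch by Msg' exit so the robot continues moving.
                self.set_branch_exit(branch_name="after_pick", exit_index=0)

    # The three calls below stand in for whatever transport the adapter uses
    # (TCP socket, PLC registers, etc.); fill them in per the programming guide.
    def send_to_host(self, cmd): ...
    def wait_for_host_reply(self, timeout): ...
    def set_branch_exit(self, branch_name, exit_index): ...
```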

Mech-DLK 2.4.2: Improving the Efficiency of Deep Learning Image Annotation

7. Q1 (2023/09/19)

For irregularly shaped workpieces like the one shown in the figure, there are often numerous anchor points to annotate, and annotating a single image can be time-consuming. Are there any methods to improve annotation efficiency? Can data be annotated using other software and then imported into Mech-DLK for model training?

7. A1 (2023/09/20)

Hello:

  1. Annotating irregularly shaped workpieces like the one shown in the image can be challenging. In Mech-DLK, you can use the intelligent annotation tool to speed up the annotation process; for inaccurate anchor points, manual adjustments are still necessary.
  2. The current version of Mech-DLK supports importing pre-annotated datasets in the dlkdb format, which allows annotation work to be distributed using Mech-DLK. The next version will also support importing datasets in the COCO format (see the sketch below).
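
Since the COCO instance-segmentation layout is a public format, annotations made in third-party tools can be exported to it ahead of that import support. A minimal sketch of a one-image, one-polygon COCO file:

```python
import json

# Minimal COCO-style instance-segmentation dataset: one image, one polygon.
coco = {
    "images": [{"id": 1, "file_name": "part_001.png", "width": 1280, "height": 1024}],
    "categories": [{"id": 1, "name": "workpiece"}],
    "annotations": [{
        "id": 1,
        "image_id": 1,
        "category_id": 1,
        # Polygon as a flat [x1, y1, x2, y2, ...] list, per the COCO spec.
        "segmentation": [[100, 100, 400, 100, 400, 300, 100, 300]],
        "bbox": [100, 100, 300, 200],  # x, y, width, height
        "area": 300 * 200,
        "iscrowd": 0,
    }],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```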

How to Sort by the Angle of Rotation Around an Axis: Grasp the Horizontal Products First, Then the Inclined Ones

8. Q (2023/09/22)

There are both horizontal and inclined products; how should they be sorted?

8. A (2023/09/22)

Hello, you can start by calculating the angle between the Z-axis of each workpiece’s pose and the Z-axis of the robot’s origin pose. Then, take the absolute value of the angle (which represents the tilt of the workpiece). Next, sort the list of angle values in ascending order and output the index list. Finally, reorder the workpiece poses according to the index list.
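
These steps translate to a few lines of pose math. A minimal sketch using numpy, assuming poses are given as 3×3 rotation matrices in the robot base frame (the function names are illustrative, not Step outputs):

```python
import numpy as np

def tilt_angle_deg(rotation_matrix: np.ndarray) -> float:
    """Angle between the pose's Z-axis and the robot base Z-axis, in degrees."""
    pose_z = rotation_matrix[:, 2]  # third column = the pose's Z-axis
    cos_a = np.clip(pose_z @ np.array([0.0, 0.0, 1.0]), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_a)))

def sort_poses_by_tilt(rotations):
    """Return (index_list, sorted_rotations): flattest workpieces first."""
    angles = [abs(tilt_angle_deg(r)) for r in rotations]
    order = np.argsort(angles)  # ascending: horizontal before inclined
    return order, [rotations[i] for i in order]

# Example: one horizontal pose (identity) and one pose tilted 30 degrees about X.
flat = np.eye(3)
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
tilted = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
order, _ = sort_poses_by_tilt([tilted, flat])
print(order)  # [1 0]: the horizontal pose comes first
```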

Note: Additionally, if you want to sort poses by both tilt angle and height, you can also calculate a height value list. Input both the angle value list and the height value list into the Step “Sort by Two Values”, then reorder the poses according to the output index list (a small two-key example follows).
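
The two-value variant amounts to a lexicographic sort over the two key lists. A small illustrative example (the input values are made up):

```python
# Sort primarily by tilt angle, secondarily by height (higher workpieces first).
angles  = [0.5, 30.2, 0.4, 29.8]    # degrees, as from the previous sketch
heights = [1.20, 0.95, 0.80, 1.10]  # metres, e.g., the pose's Z translation

# Two-key sort: ascending angle first; within near-equal angles (grouped here by
# rounding to the nearest degree), descending height.
order = sorted(range(len(angles)), key=lambda i: (round(angles[i]), -heights[i]))
print(order)  # [0, 2, 3, 1]

# Reorder the pose list with this index list, as the Step would.
```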