Collection of the latest AI-translated Q&A posts (2023/09/28–2023/10/07)

This post compiles the latest questions and answers from the Q&A category, translated by AI. The questions were posted between September 28, 2023, and October 7, 2023.
Please note: AI translation has its limitations; please judge the specifics against the actual context.

Robot End Effector Collision

1. Q1 (2023/09/28)


Path planning fails and reports a collision between the robot’s end effector and its wrist joint. However, I can’t see where the collision is in the view. What could be the reason for this?

1. A1 (2023/09/28)

Hello, if you click the corresponding entry, the model will highlight the relevant part, as shown in the image below:
(screenshot)

1. Q2 (2023/09/28)


I still can’t see where the collision is; they don’t appear to be in contact in the view.

1. A2 (2023/09/28)

Try saving the project and reopening the software to check if the issue persists. If it does, please upload the project for further assistance.

Also, could you please let us know the software version you are using?

1. Q3 (2023/09/28)

It’s fine now. I just toggled the Smart Obstacle Avoidance switch.

1. A3 (2023/09/28)

Hello, we’re glad the issue has disappeared. :grin:

However, in order to understand the root cause of the problem and prevent others from encountering it in the future, we would appreciate it if you could provide more information for analysis, such as the robot model, Mech-Viz project, and any differences in operations before and after the issue occurred.

Thank you very much!

Perform point cloud filtering in the Z-axis direction

2. Q (2023/09/28)


Instance segmentation isn’t very accurate, so background point clouds end up in the segmentation result. I’d like to filter out the lower-level point clouds. What’s the best way to do this?

2. A (2023/09/28)

For this portion of the point cloud, you can extract the upper layers with the Step “Get Highest Layer Clouds”. Be sure to select the appropriate direction and point cloud layer height.
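
For reference, the idea behind that Step can be sketched in a few lines of NumPy. This is only an illustration, not the Step’s actual implementation; it assumes the cloud is an (N, 3) array and that “up” is +Z:

import numpy as np

def get_highest_layer(cloud, layer_height=0.02):
    # Keep only points within layer_height (in meters) of the highest
    # point; "up" is assumed to be +Z here, while the real Step lets
    # you choose the direction.
    z_max = cloud[:, 2].max()
    return cloud[cloud[:, 2] >= z_max - layer_height]

cloud = np.array([[0.0, 0.0, 0.50],
                  [0.1, 0.0, 0.49],
                  [0.2, 0.0, 0.30]])
print(get_highest_layer(cloud))  # drops the background point at z = 0.30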

Mech-Viz Path Planning

3. Q1 (2023/09/28)

The robot sent “201, 2, robot position” to trigger the Mech-Viz project with visual input, and then sent “205, 1” to retrieve the planned path.
However, the retrieved pick points differ significantly from the expected values.
The question is: in previous projects, the pick point was determined through Mech-Vision and template matching. With Mech-Viz, how does it obtain the pick points derived from Mech-Vision’s template matching?

3. A1 (2023/09/28)

The command “205” returns all waypoints.
If you only want the part of the path related to the Mech-Vision recognition results, deselect the unnecessary relative-move and fixed-move waypoints under the “Send Waypoint” option, keeping only the visual-move waypoints.


If you still encounter significant differences with Mech-Vision, please reply again.

3. Q2 (2023/09/28)

I have now disabled sending all other waypoints, leaving only the visual-move waypoint, but the data sent still differs significantly. It does not appear to be the pose data obtained after Mech-Vision’s matching.

In Mech-Vision’s workflow, should the final output be sent to Mech-Viz?
I’m a bit confused about this process. During path planning, how does Mech-Viz obtain the picking data from Mech-Vision’s template matching? Mech-Vision only sends data to Mech-Viz as an external service when the camera data is acquired, so how does Mech-Viz receive the matching results afterwards?

3. A2 (2023/09/28)

Mech-Viz obtains data from Mech-Vision through two service names: “Vision Look” and “Check Look”:
(screenshots)

3. Q3 (2023/09/28)

My setup appears to be connected to the Mech-Vision project, and I’m now sending only the visual-move waypoint, but the data is still incorrect. What else can I investigate?

(screenshot)
Is there an issue with the Mech-Vision output settings? When running Mech-Vision, why does the log indicate “output without control flow”?

3. Q4 (2023/10/02)

The problem has been resolved: I needed to send “205, 2” instead.

I was requesting Cartesian coordinates, whereas I had been sending “205, 1”, which is why it was not successful.
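
For readers following this thread: the standard interface commands quoted here are comma-separated ASCII strings sent over TCP. A minimal Python sketch of the exchange, assuming a hypothetical IPC address (the port 50000 is the interface’s usual default, but check your setup):

import socket

HOST, PORT = "192.168.1.10", 50000  # hypothetical address; adjust to your setup

def send_command(cmd):
    # Send one ASCII command and read the reply on the same connection.
    with socket.create_connection((HOST, PORT), timeout=10) as s:
        s.sendall(cmd.encode("ascii"))
        return s.recv(4096).decode("ascii")

print(send_command("205, 1"))  # planned path as joint positions
print(send_command("205, 2"))  # planned path as Cartesian poses (what was needed here)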

How to include point clouds outside the 3D ROI region in collision detection?

4. Q (2023/10/06)

How do I set this up, as shown in the image above?
The picking area is the red box: a product can be picked if it is inside the red area.
The green area needs to be checked for products and for possible collisions during picking.

4. A (2023/10/07)

Mech-Viz calculates collisions based on the point clouds it receives, so it’s important to confirm whether Mech-Vision sends Mech-Viz the point cloud from within the ROI only or from the entire camera field of view. Point clouds from the green area will take part in collision detection once they are sent to Mech-Viz (assuming “Detect collision between point cloud and others” is enabled in Mech-Viz).
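
In other words, the point cloud sent to Mech-Viz should cover both areas, while the candidate pick poses are restricted to the red box. A minimal NumPy sketch of that split, with hypothetical box bounds:

import numpy as np

# Hypothetical bounds (in meters) of the red picking area, in the same
# frame as the poses output by Mech-Vision.
RED_MIN = np.array([-0.3, -0.2, 0.0])
RED_MAX = np.array([0.3, 0.2, 0.5])

def poses_in_pick_area(positions):
    # Boolean mask over (N, 3) pose positions inside the red box.
    return np.all((positions >= RED_MIN) & (positions <= RED_MAX), axis=1)

# The full-FOV cloud (red plus green areas) is passed through unfiltered
# so products in the green area still take part in collision checking;
# only the candidate pick poses are limited to the red area.
positions = np.array([[0.1, 0.0, 0.2], [0.6, 0.0, 0.2]])
print(poses_in_pick_area(positions))  # [ True False]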

Considerations for Coordinating Cameras and Gantry Robots

5. Q (2023/10/07)

Are there any considerations when coordinating cameras with gantry robots, including calibration, communication, or other aspects?

5. A (2023/10/07)

Considerations include:

  1. Determine whether the gantry’s coordinate system is left-handed or right-handed (for a left-handed coordinate system, adjustments may be needed during calibration); a quick check is sketched after this list.
  2. Assess the camera’s mounting position, i.e., whether it is eye-to-hand or eye-in-hand. If it is eye-in-hand, identify where the camera sits on the gantry (which of the gantry’s degrees of freedom affect the camera’s position).
  3. Communication typically uses the standard interface or an adapter. Establish a communication protocol between the two devices.
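
Regarding point 1, a quick handedness check is the sign of the determinant of the three axis directions; below is a small generic sketch with made-up axes, not a Mech-Mind utility:

import numpy as np

def is_right_handed(x_axis, y_axis, z_axis):
    # The determinant of the axis matrix is +1 for a right-handed frame
    # and -1 for a left-handed one (equivalently, x cross y equals z
    # only in the right-handed case).
    return np.linalg.det(np.column_stack([x_axis, y_axis, z_axis])) > 0

# A gantry reporting Z as "down" while X/Y follow the usual layout is
# left-handed:
print(is_right_handed([1, 0, 0], [0, 1, 0], [0, 0, -1]))  # False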

For specific issues, seek technical support from Mech-Mind during the implementation process.

Workpiece 3D Pose Sorting and Path Planning

6. Q (2023/10/04)


The workpiece gripper and workpieces are as shown in the image above.


When there are multiple workpieces, I sort the 3D poses in ascending order of the X coordinate. During path planning, the very first attempt, picking workpiece 4, fails. How can the system automatically select the best workpiece to grasp?

6. A (2023/10/07)

In your post, I see two issues:

Issue 1:
Path planning fails. To address this, review the planning history to determine the cause of the failure. Based on the image, it might be a collision between the gripper and the point cloud of workpiece 2 when attempting to grasp workpiece 4.
Solution:

  1. Confirm whether the workpiece pose recognition in the Mech-Vision project is correct.
  2. Verify if workpiece 4 can actually be grasped on-site. If it can be grasped, check if Mech-Viz’s attempted grasping poses are inappropriate. You can try modifying tool symmetry, workpiece symmetry, or grasp clearance on-site to allow Mech-Viz to attempt different grasping poses.

Issue 2:
You mentioned “automatically selecting the best workpiece to grasp”. I understand this as needing to sort based on minimal point cloud collision.
Solution:

  • In the Mech-Vision project, you can use the Step “Extract 3D Points in Cuboid” to extract the point cloud along the gripper’s grasping direction. Sort the poses by the number of points extracted, or directly filter poses out based on that count, and then output the poses to Mech-Viz. Below is a reference project setup approach; a sketch of the sorting idea follows.
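
As a rough illustration of that sorting idea (not the actual Step, which works along the gripper’s grasping direction), here is a NumPy sketch that counts the cloud points in a cuboid around each candidate pose and tries the least obstructed pose first:

import numpy as np

def count_points_in_cuboid(cloud, center, half_extents):
    # Count cloud points inside an axis-aligned cuboid around center;
    # the axis-aligned box is a simplification of the real Step.
    inside = np.all(np.abs(cloud - center) <= half_extents, axis=1)
    return int(inside.sum())

def sort_poses_by_obstruction(positions, cloud, half_extents):
    # Fewer points in the approach cuboid means less material in the
    # way, so those poses are ordered first.
    counts = [count_points_in_cuboid(cloud, p, half_extents) for p in positions]
    return [positions[i] for i in np.argsort(counts)]

positions = np.array([[0.0, 0.0, 0.1], [0.5, 0.0, 0.1]])
cloud = np.random.default_rng(0).uniform(-0.1, 0.1, size=(100, 3))
print(sort_poses_by_obstruction(positions, cloud, np.array([0.05, 0.05, 0.2])))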

Is there a method to convert a rotation vector into a quaternion?

7. Q1 (2023/10/06)


Currently, I’m converting quaternions into XYZ rotation vectors, splitting them into NumberLists, performing calculations on the data, and then combining the results into a Vector3DList. However, I haven’t found a way to convert this data type back into quaternions, or to convert NumberLists directly into quaternion angles. As a result, the new pose lacks its quaternion rotation component, and I can’t obtain a new pose with a QuaternionList.

7. A1 (2023/10/07)


You can use the Step shown in the screenshot above to combine the quaternions.


The original quaternion can be decomposed into XYZ axes as shown in the Step above.
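
If you ever need the same conversion outside Mech-Vision, SciPy’s Rotation class covers both directions; this is a generic sketch, not a Mech-Vision Step:

import numpy as np
from scipy.spatial.transform import Rotation as R

# Quaternion -> rotation vector (SciPy orders quaternions as x, y, z, w).
q = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])  # 90 deg about Z
rotvec = R.from_quat(q).as_rotvec()
print(rotvec)  # [0. 0. 1.5707963...]

# ...perform whatever per-component math is needed...

# Rotation vector -> quaternion (the missing reverse step).
print(R.from_rotvec(rotvec).as_quat())  # recovers the quaternion (up to sign)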

7. Q2 (2023/10/07)

It seems I’ve made some progress in my research.

I used to think each axis had only one value, but in fact, when you decompose a rotation vector, each axis has three components, so with three axes you have nine values in total. To represent a direction vector, you recombine these values into new XYZ components for a single axis. You don’t need to recompute all nine values from the decomposition: two of the axis vectors, such as X and Y, Y and Z, or Z and X, are enough to recreate the rotation.

I’m not sure if I’m understanding this correctly; I used to think you needed all three axis vectors to regenerate new quaternion values.
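
For context, the nine values mentioned here are the entries of a 3×3 rotation matrix, and because its columns are orthonormal, two axis vectors are indeed enough: the third is their cross product. A generic NumPy/SciPy sketch (again, not a Mech-Vision Step):

import numpy as np
from scipy.spatial.transform import Rotation as R

# Two orthonormal axes of a rotation; the third is their cross product.
x_axis = np.array([0.0, 1.0, 0.0])
y_axis = np.array([-1.0, 0.0, 0.0])
z_axis = np.cross(x_axis, y_axis)  # -> [0, 0, 1]

# Stack the axes as columns and convert the matrix to a quaternion.
matrix = np.column_stack([x_axis, y_axis, z_axis])
print(R.from_matrix(matrix).as_quat())  # 90 deg about Z, as (x, y, z, w)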

ABB Standard Interface Instruction Invocation Timing

8. Q1 (2023/10/03)

Hello,

While testing the ABB standard interface sample picking program, with Mech-Viz doing the path planning, I noticed relatively long waits when setting the Mech-Vision parameter recipe and triggering the Mech-Viz project:

  • MM_Switch_Model takes 0.5 seconds.
  • MM_Start_Viz takes 0.6 seconds.
  • MM_Set_Branch takes 0.5 seconds.

Is there any way to shorten the waiting time for these three instructions?

8. A1 (2023/10/07)

Hello, are you testing this through the communication assistant?

Also, when you mention waiting time, do you mean that you trigger the instruction and then wait for a certain time (0.6 seconds) before the instruction is actually executed?

8. Q2 (2023/10/07)

Hello, I am measuring the time each instruction takes to run in the ABB controller.

! Timer declarations (added here for completeness of the snippet).
VAR clock checkTime;
VAR num checkrecordTime;

! Time the recipe-switching instruction.
ClkReset checkTime;
ClkStart checkTime;
MM_Switch_Model 1,1;
ClkStop checkTime;
checkrecordTime:=ClkRead(checkTime);
TPWrite "MM_Switch_Model Time:= "+NumToStr(checkrecordTime,5);

! Time the Mech-Viz trigger instruction.
ClkReset checkTime;
ClkStart checkTime;
MM_Start_Viz 1,snap_jps;
ClkStop checkTime;
checkrecordTime:=ClkRead(checkTime);
TPWrite "MM_Start_Viz Time:= "+NumToStr(checkrecordTime,5);

! Time the branch-setting instruction.
ClkReset checkTime;
ClkStart checkTime;
MM_Set_Branch 1,1;
ClkStop checkTime;
checkrecordTime:=ClkRead(checkTime);
TPWrite "MM_Set_Branch Time:= "+NumToStr(checkrecordTime,5);

Each of the three instructions takes the time shown below, roughly 1.6 seconds in total, after which the camera starts capturing.

MM_Switch_Model Time:= 0.5
MM_Start_Viz Time:= 0.6
MM_Set_Branch Time:= 0.5

8. A2 (2023/10/07)

Hello, with the current ABB standard interface instructions, socket connections are short-lived: each instruction connects to the server, sends the instruction, and then disconnects the socket. This slightly extends the program’s execution time. In version 1.8.0 of the software, we will change the socket to a long-lived connection, which will resolve this issue.

You can either modify the program yourself or wait for the release of version 1.8.0. You can also try contacting Mech-Mind for further technical support.
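
To illustrate the difference in language-agnostic terms, here is a small Python sketch contrasting the two connection styles; the host, port, and framing are assumptions, not the actual module code:

import socket

HOST, PORT = "192.168.1.10", 50000  # hypothetical server address

def short_lived(cmd):
    # Pre-1.8.0 behavior: connect, send, disconnect for every
    # instruction, paying TCP setup/teardown each time.
    with socket.create_connection((HOST, PORT), timeout=10) as s:
        s.sendall(cmd.encode())
        return s.recv(4096).decode()

class LongLivedClient:
    # 1.8.0 behavior (what the MM_Open_Socket instruction mentioned
    # below enables): one connection reused for every instruction.
    def __init__(self):
        self.sock = socket.create_connection((HOST, PORT), timeout=10)

    def send(self, cmd):
        self.sock.sendall(cmd.encode())
        return self.sock.recv(4096).decode()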

8. Q3 (2023/10/07)

Thank you for your reply.

Since this currently has a significant impact on cycle time, I’d like to make the modifications myself. Could you provide the method and location for making these changes here?

8. A3 (2023/10/07)

MM_Module.7z (7.0 KB)
Hello, you can use the program above, but you will need to call MM_Open_Socket, the socket-connection instruction, in the main program before triggering the other standard interface instructions.