This post compiles the latest questions and answers from the Q&A category translated by AI. The questions were posted between September 28, 2023, and October 7, 2023.
Please note: AI translation has its limitations, so please interpret the content in light of the actual context.
Path planning fails, reporting a collision between the robot’s end effector and its wrist joint. However, I can’t see where the collision is in the diagram. What could be the reason for this?
Hello, if you click on the corresponding entry, the relevant portion of the model will be highlighted, as shown in the images below:
I still can’t see where the collision is. The two parts don’t appear to be in contact in the diagram.
Try saving the project and reopening the software to check if the issue persists. If it does, please upload the project for further assistance.
Also, could you please let us know the software version you are using?
It’s fine now. I just toggled the Smart Obstacle Avoidance switch.
Hello, we’re glad the issue has disappeared.
However, in order to understand the root cause of the problem and prevent others from encountering it in the future, we would appreciate it if you could provide more information for analysis, such as the robot model, Mech-Viz project, and any differences in operations before and after the issue occurred.
Thank you very much!
Instance segmentation isn’t very accurate, resulting in the segmentation of background point clouds. I’d like to filter out the lower-level point clouds. What’s the best way to do this?
For this portion of the point cloud, you can extract the upper-level point clouds with the Step “Get Highest Layer Clouds”. Make sure to select an appropriate direction and point cloud layer height.
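As a rough illustration of what such a layer filter does, here is a NumPy sketch (this is not the actual Step implementation; the direction and layer height are the parameters you would tune):

```python
import numpy as np

def get_highest_layer(points, direction=(0.0, 0.0, 1.0), layer_height=0.01):
    """Keep only the points within `layer_height` of the topmost point
    along `direction` (a rough stand-in for the Step's behavior)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)            # normalize the layer direction
    heights = points @ d              # signed height of each point
    return points[heights >= heights.max() - layer_height]

# Example: two stacked layers at z = 0.0 and z = 0.1
cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.1], [1.0, 1.0, 0.1]])
upper = get_highest_layer(cloud, layer_height=0.05)
```

With a layer height of 0.05, only the points of the top layer (z = 0.1) survive; background points on the lower layer are filtered out.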
The robot sent “201, 2, robot position” to initiate the Mech-Viz project with visual input. Then, it sent “205, 1” to retrieve the planned path.
However, the pick points retrieved this way differ significantly from what I expect.
Now, the question is: in previous projects, the pick point was determined through Mech-Vision and template matching. How does Mech-Viz obtain the pick points derived from Mech-Vision’s template matching?
Figure 1—Mech-Viz Path Planning for Collision Detection:
Figure 2—Mech-Viz Workflow:
Figure 3—Mech-Vision’s 3D Template Matching and Mapping to Pick Points:
Figure 4—Pick Point Coordinates Output when Running Mech-Vision Independently:
Figure 5—Coordinates of Planned Points Sent by Mech-Viz:
The command “205” is meant to return all waypoints.
If you only want the path related to the Mech-Vision recognition results, you can deselect other unnecessary points for relative and fixed movements in the “Send Waypoint” option. Only keep the visual movements.
If you still encounter significant differences with Mech-Vision, please reply again.
I have now disabled the sending of all other points, leaving only the visual movement point. However, the data sent still differs significantly. This data does not seem to be the point data obtained after Mech-Vision’s matching.
In Mech-Vision’s process, should the final output data be sent to Mech-Viz?
I’m a bit confused about this process. During path planning, how does Mech-Viz obtain the pick data from Mech-Vision’s product matching? Mech-Vision only sends data to Mech-Viz through the external service when the camera data is acquired, so how does Mech-Viz receive the matched data afterwards?
Mech-Viz obtains data from Mech-Vision through two service names: “Vision Look” and “Check Look”:
It seems that my setup here is connected to the Mech-Vision project. Now, I’m only sending data for the visual movement point, but the data is still incorrect. What other aspects can be investigated?
Is there an issue with the Mech-Vision output settings? When running Mech-Vision, why does the log indicate “output without control flow”?
The problem has been resolved. I needed to send “205, 2” instead.
That requests Cartesian coordinates, whereas previously I was sending “205, 1”, which is why it was not successful.
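For reference, the comma-separated command strings discussed in this thread can be assembled like this (a Python sketch; the exact framing, parameter meanings, and transport details should be checked against the standard interface manual):

```python
def build_cmd(code, *params):
    """Format a standard-interface command as the comma-separated
    ASCII string used in this thread (framing is simplified here)."""
    return ", ".join(str(p) for p in (code, *params))

# 205, 1 -> planned path returned as joint positions
# 205, 2 -> planned path returned as Cartesian coordinates
#           (the form that resolved the issue above)
get_joint_path     = build_cmd(205, 1)
get_cartesian_path = build_cmd(205, 2)
```

The resulting string would then be sent over the interface’s TCP connection; the second parameter selects the waypoint representation, which is why “205, 1” and “205, 2” return differently formatted data.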
How do I set it up as shown in the image above?
The picking area is the red box: a product can be picked if it lies inside the red area.
The green area needs to be checked for the presence of products and for possible collisions during picking.
Mech-Viz calculates collisions based on the received point clouds. It’s important to ensure whether Mech-Vision is sending point clouds from within the ROI or the entire camera field of view to Mech-Viz. Point clouds from the green area will participate in collision detection once they are sent to Mech-Viz (assuming “Detect collision between point cloud and others” is enabled in Mech-Viz).
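If you need to restrict which points are sent, cropping the cloud to an axis-aligned ROI box can be sketched as follows (a NumPy illustration of the idea, not Mech-Vision’s actual ROI step; the box bounds are made up):

```python
import numpy as np

def crop_to_roi(points, lower, upper):
    """Keep only the points inside an axis-aligned ROI box, so that only
    the relevant region (e.g. the green area) takes part in collision
    detection once the cloud reaches Mech-Viz."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    mask = np.all((points >= lower) & (points <= upper), axis=1)
    return points[mask]

# One point inside the ROI, one outside
cloud = np.array([[0.1, 0.1, 0.1], [2.0, 0.1, 0.1]])
roi_cloud = crop_to_roi(cloud, lower=[0, 0, 0], upper=[1, 1, 1])
```

Only points surviving the crop would then be sent on for collision checking; anything outside the box is ignored entirely.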
Are there any considerations when coordinating cameras with gantry robots, including calibration, communication, or other aspects?
- Determine whether the gantry’s coordinate system is left-handed or right-handed (In the case of a left-handed coordinate system, adjustments may be needed during calibration).
- Assess the camera’s installation position, whether it is eye-to-hand (mounted in a fixed position) or eye-in-hand (mounted on the gantry). If it’s eye-in-hand, specify its location on the gantry (identify which of the gantry’s degrees of freedom affect the camera’s position).
- Communication methods typically involve standard interfaces or communication via adapters. Establish a communication protocol between both devices.
For specific issues, seek technical support from Mech-Mind during the implementation process.
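The handedness check in the first point can be done numerically: a frame is right-handed exactly when the determinant of its axis matrix is positive (equivalently, when x × y points along +z). A small NumPy sketch:

```python
import numpy as np

def is_right_handed(x_axis, y_axis, z_axis):
    """A frame is right-handed when det[x | y | z] > 0
    (i.e. x cross y points along +z); otherwise left-handed."""
    m = np.column_stack([x_axis, y_axis, z_axis])
    return float(np.linalg.det(m)) > 0

# A standard right-handed frame
assert is_right_handed([1, 0, 0], [0, 1, 0], [0, 0, 1])
# Flipping any single axis makes it left-handed
assert not is_right_handed([1, 0, 0], [0, 1, 0], [0, 0, -1])
```

If a gantry turns out to be left-handed, that is the sign convention that would need compensating during calibration.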
The workpiece gripper and workpieces are as shown in the image above.
When there are multiple workpieces, I use 3D pose sorting in ascending order of the X-axis coordinate. During path planning, it fails on the first attempt to pick up workpiece 4. How can it automatically select the best workpiece to grasp?
In your post, I see two issues:
Path planning failure. To address this, review the planning history to determine the cause. Based on the image, it might be due to a collision between the gripper and the point cloud of workpiece 2 when attempting to grasp workpiece 4.
- Confirm whether the workpiece pose recognition in the Mech-Vision project is correct.
- Verify if workpiece 4 can actually be grasped on-site. If it can be grasped, check if Mech-Viz’s attempted grasping poses are inappropriate. You can try modifying tool symmetry, workpiece symmetry, or grasp clearance on-site to allow Mech-Viz to attempt different grasping poses.
You mentioned “automatically selecting the best workpiece to grasp”. I understand this as sorting so that the pose with the least point cloud interference is tried first.
- In the Mech-Vision project, you can use the Step “Extract 3D Points in Cuboid” to extract the point clouds in the gripper’s grasping direction. Sort based on the number of point clouds extracted, or directly delete poses with fewer extracted point clouds, and then output the poses in the Mech-Viz software. Below is a reference project setup approach.
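The sorting idea above can be sketched like this (a NumPy illustration of the approach, not the actual Steps; the function names are invented, poses are reduced to grasp-point positions, and the extraction cuboid is axis-aligned for simplicity):

```python
import numpy as np

def count_points_in_cuboid(points, center, half_extents):
    """Count scene points inside an axis-aligned cuboid around a grasp
    point (a rough stand-in for 'Extract 3D Points in Cuboid')."""
    lo = np.asarray(center) - half_extents
    hi = np.asarray(center) + half_extents
    return int(np.all((points >= lo) & (points <= hi), axis=1).sum())

def sort_poses_by_clearance(poses, cloud, half_extents):
    """Prefer poses whose approach region contains the fewest points,
    so the least obstructed workpiece is attempted first."""
    counts = [count_points_in_cuboid(cloud, p, half_extents) for p in poses]
    return [p for _, p in sorted(zip(counts, poses), key=lambda t: t[0])]

# One stray point hovers over pose A; pose B is clear, so B ranks first
cloud = np.array([[0.0, 0.0, 0.05]])
poses = [[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]]
ranked = sort_poses_by_clearance(poses, cloud, np.array([0.1, 0.1, 0.1]))
```

Poses with many extracted points could instead be deleted outright, matching the alternative mentioned above.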
Currently, I convert quaternions into XYZ rotation vectors, split them into NumberLists, perform calculations on the data, and then combine the results into a Vector3DList. However, I haven’t found a way to convert this data type back into quaternions, or to convert NumberLists directly into quaternions. As a result, the new pose lacks a quaternion rotation, and I can’t obtain a new pose as a QuaternionList.
You can choose to use the Step shown in the screenshot above to combine quaternions.
The original quaternion can be decomposed into XYZ axes as shown in the Step above.
It seems like I’ve made some progress in my research.
I used to think that each axis had only one value, but in fact, when you decompose a rotation, each axis is a direction vector with three components, so the three axes give nine values in total. To represent one axis direction, you recombine three of those values into a new XYZ vector. You don’t need all nine values from the decomposition to rebuild the rotation: two axis directions, such as X and Y (or Y and Z, or Z and X), are enough to recreate it.
I’m not sure if I’m understanding this correctly; I used to think you needed all three axis directions to regenerate the quaternion.
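Two axis directions are indeed enough: the third axis is the cross product of the other two, so it carries no extra information. A NumPy sketch rebuilding the full rotation matrix from X and Y (the matrix can then be converted to a quaternion to obtain the QuaternionList):

```python
import numpy as np

def rotation_from_two_axes(x_axis, y_axis):
    """Rebuild a full rotation matrix from just the X and Y axis
    directions; Z follows from the cross product, which is why two
    decomposed axes determine the whole orientation."""
    x = np.asarray(x_axis, dtype=float)
    x /= np.linalg.norm(x)
    y = np.asarray(y_axis, dtype=float)
    y = y - (y @ x) * x              # make y exactly orthogonal to x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)               # the third axis is implied
    return np.column_stack([x, y, z])

# A 90-degree rotation about Z, recovered from its X and Y columns only
r = rotation_from_two_axes([0, 1, 0], [-1, 0, 0])
```

The re-orthogonalization step also absorbs small numerical errors from the earlier per-axis calculations before the rotation is reassembled.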
While testing the ABB standard interface sample programs that use Mech-Viz for path planning, I noticed a relatively long wait when switching the Mech-Vision parameter recipe and triggering the Mech-Viz project to run:
- MM_Switch_Model takes 0.5 seconds.
- MM_Start_Viz takes 0.6 seconds.
- MM_Set_Branch takes 0.5 seconds.
Is there any way to shorten the waiting time for these three instructions?
Hello, are you testing this through the communication assistant?
Also, when you mention waiting time, do you mean that you trigger the instruction and then wait for a certain time (e.g., 0.6 seconds) before the instruction is actually executed?
Hello, I am recording the time it takes for each instruction to run in the ABB controller.
TPWrite "MM_Switch_Model Time:= "+NumToStr(checkrecordTime,5);
TPWrite "MM_Start_Viz Time:= "+NumToStr(checkrecordTime,5);
TPWrite "MM_Set_Branch Time:= "+NumToStr(checkrecordTime,5);
The three lines of instructions each take the respective time to execute, totaling 1.5 seconds, after which the camera starts capturing.
MM_Switch_Model Time:= 0.5
MM_Start_Viz Time:= 0.6
MM_Set_Branch Time:= 0.5
Hello, in ABB’s standard interface instructions, socket connections are short-lived: each instruction connects to the server, sends the command, and then closes the socket, which slightly extends the program’s execution time. In version 1.8.0 of the software, we will change the socket to a long-lived (persistent) connection, which will resolve this issue.
You can either modify the program yourself or wait for the release of version 1.8.0. You can also try contacting Mech-Mind for further technical support.
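The difference between the two connection patterns can be illustrated in plain Python (a toy loopback echo server stands in for the real interface; command framing and the RAPID side are omitted):

```python
import socket
import threading

def start_echo_server():
    """Tiny loopback server standing in for the interface: echoes each
    received message back, serving one connection at a time."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen()
    port = srv.getsockname()[1]

    def serve():
        while True:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(1024):
                    conn.sendall(data)

    threading.Thread(target=serve, daemon=True).start()
    return port

def run_short_lived(port, cmds):
    """Pre-1.8.0 pattern: connect, send, disconnect for every command.
    The repeated TCP handshakes add latency to each instruction."""
    replies = []
    for cmd in cmds:
        with socket.create_connection(("127.0.0.1", port)) as s:
            s.sendall(cmd.encode())
            replies.append(s.recv(1024).decode())
    return replies

def run_persistent(port, cmds):
    """1.8.0 pattern: open the socket once and reuse it for all commands,
    paying the connection cost a single time."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        replies = []
        for cmd in cmds:
            s.sendall(cmd.encode())
            replies.append(s.recv(1024).decode())
        return replies

port = start_echo_server()
cmds = ["MM_Switch_Model", "MM_Start_Viz", "MM_Set_Branch"]
```

Both patterns return the same replies; only the per-command connection overhead differs, which is what the 0.5–0.6 second instruction times above were measuring.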
Thank you for your reply.
Since this currently has a significant impact on cycle time, I’d like to make the modifications myself. Could you provide the method and location for making these changes here?
MM_Module.7z (7.0 KB)
Hello, you can use the program above, but you will need to call MM_Open_Socket() as a socket connection instruction in the main program before triggering the other standard interface instructions.