This post compiles the latest questions and answers from the Q&A category, translated by AI. The questions were posted between October 8 and October 13, 2023.
Please note: AI has its limitations, and specific content should still be discerned based on the actual context.
Hello, I want to transform a point in the camera coordinate system to the position in the robot’s base coordinate system. How can I do this?
I tried using the Step “Adjust Poses”, but it gives an error, saying there’s no scene point cloud available for reference in the current project.
Hello, in Mech-Vision, you can perform the pose transformation from the camera reference frame to the robot’s reference frame using the Step “Transform Poses”.
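For background, transforming a point from the camera frame to the robot base frame is a standard rigid-body transform using the hand-eye calibration result. Below is a minimal NumPy sketch, not Mech-Vision's internal implementation; the matrix values are illustrative, not from a real calibration:

```python
import numpy as np

def camera_to_base(point_cam, T_base_cam):
    """Transform a 3D point from the camera frame to the robot base frame.

    T_base_cam is the 4x4 homogeneous extrinsic matrix (camera pose in
    the base frame), typically obtained from hand-eye calibration.
    """
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)  # homogeneous
    return (T_base_cam @ p)[:3]

# Illustrative extrinsic: camera mounted 1 m above the base looking
# straight down (rotated 180 degrees about x), no horizontal offset.
T = np.array([
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 1.0],
    [0.0,  0.0,  0.0, 1.0],
])
print(camera_to_base([0.1, 0.2, 0.5], T))  # -> [0.1, -0.2, 0.5]
```

In Mech-Vision itself you would simply configure the Step "Transform Poses"; the sketch only shows the math it performs.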
Software Version: Mech-Vision 1.7.2
Problem Description: The workshop lighting is relatively dim, and there are black powder-coated hubs on the conveyor line. After the camera takes photos, the quality of the depth map is poor, as shown in the images below:
First group of color images and depth maps:
Second group of color images and depth maps:
Other depth maps
How can this issue be resolved?
From the information provided, it appears that you are using a camera to capture an image of a black hub, but the resulting depth map quality is poor. I suspect this might be due to a short 3D Exposure Time. You can try the following steps to adjust and address the issue:
- Adjust camera exposure: check whether the 2D or 3D image is overexposed or underexposed using Mech-Eye Viewer or third-party software
Refer to the section titled How to check if a 3D depth map is overexposed or underexposed in the linked post above to diagnose the specific cause of your issue. If the flash image’s Gray Value in the area of missing point clouds is low, underexposure is the likely cause.
- Proposed Solutions
Point Cloud Loss Due to Underexposure: a) Increase the 3D Exposure Time; b) Change the camera’s Visibility level to “Expert” and attempt to increase camera gain.
Point Cloud Loss Due to Overexposure: Decrease the 3D Exposure Time.
- Please note that the screenshots provided are for the “calib” camera Parameter Group, which is used during calibration. When making adjustments, make sure to modify the corresponding parameter group in the Mech-Vision project to avoid issues where adjustments do not take effect.
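The gray-value check described above can be approximated outside Mech-Eye Viewer with a few lines of NumPy, if you have the 2D image as an array. This is an illustrative sketch; the region coordinates and the thresholds mentioned in the comment are assumptions, not official values:

```python
import numpy as np

def region_mean_gray(gray_img, top, left, height, width):
    """Mean gray value of a rectangular region of a 2D grayscale image.

    A very low mean (e.g. below ~20 on an 8-bit image) in the area where
    the point cloud is missing suggests underexposure; values near 255
    suggest overexposure. These thresholds are illustrative only.
    """
    roi = gray_img[top:top + height, left:left + width]
    return float(roi.mean())

# Synthetic example: a dark patch inside an otherwise mid-gray image,
# standing in for the black hub surface.
img = np.full((100, 100), 128, dtype=np.uint8)
img[40:60, 40:60] = 5
print(region_mean_gray(img, 40, 40, 20, 20))  # -> 5.0
```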
Hello, I encountered the following issue after renaming .vis files. How can I resolve this?
Hello, if you make the project folder name match the name of the .vis file, that should resolve the problem.
Additionally, you can rename it directly by right-clicking in the project list:
In the image above, when the gripper picks items off the base plate, path planning needs to detect objects beside the workpiece and keep the gripper from colliding with them.
However, during collision detection, path planning consistently fails. Is it due to the detection of collisions between the base plate and the workpiece? And how can we prevent the base plate from participating in point cloud collision detection?
Hello, regarding collision detection settings: first, set the voxel size, preferably to 1 mm.
Second, regarding the point cloud in Mech-Viz, you can process the point cloud within the Mech-Vision project, only outputting the workpiece’s point cloud to Mech-Viz, excluding the point cloud of the base plate.
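One common way to exclude a fixed fixture such as the base plate is to crop the cloud by height before passing it on. A minimal NumPy sketch follows; the heights and margin are illustrative, and in a real Mech-Vision project this would typically be done with point cloud filtering Steps rather than custom code:

```python
import numpy as np

def remove_base_plate(points, base_z, margin=0.005):
    """Keep only points above the base plate's top surface.

    points: (N, 3) array in the robot base frame (meters).
    base_z: measured z height of the base plate's top surface.
    margin: tolerance above the plate to absorb sensor noise.
    """
    return points[points[:, 2] > base_z + margin]

cloud = np.array([
    [0.1, 0.1, 0.000],   # base plate
    [0.1, 0.2, 0.002],   # base plate (sensor noise)
    [0.1, 0.1, 0.050],   # workpiece
])
print(remove_base_plate(cloud, base_z=0.0))  # keeps only the workpiece point
```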
Hello, the end effector is no longer in contact with the point cloud from any direction. Why do we still receive a point cloud collision error?
After adjusting the “Point cloud cube edge length” to 1.0 mm, you should no longer receive collision warnings. You can give it a try.
Hello, I would like to calculate the depth distance of a pixel at a certain point from Mech-Mind’s depth map. How should I go about this?
The depth map is stored in RGB-D format, 32 bits. I understand that the first 24 bits are used to store RGB values, and the last 8 bits represent the depth value. Is it sufficient to calculate the depth distance using only the last 8 bits, even though the maximum value for the last 8 bits is only 256?
Hello, the depth map output by Mech-Eye does not store color information but rather uses 32-bit data to store the z-values in float format. The color information displayed on the software interface is rendered and displayed based on the depth values.
You can use the Mech-Eye API to obtain the depth map, and you can refer to this link: Mech-Eye API.
For questions regarding the depth map format, you can check out this resource: Image data format in Mech-Eye API.
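Once you have the single-channel 32-bit float depth map as an array (e.g. via the Mech-Eye API), reading a pixel's depth is direct indexing. A hedged sketch with a synthetic map; the NaN/zero handling for invalid pixels is an assumption, so check the linked format documentation for how your camera marks invalid depth:

```python
import numpy as np

def depth_at(depth_map, row, col):
    """Return the z distance (in the depth map's unit, typically mm)
    at a given pixel of a single-channel 32-bit float depth map."""
    z = float(depth_map[row, col])
    if np.isnan(z) or z <= 0:     # assumed convention for invalid pixels
        raise ValueError("no valid depth at this pixel")
    return z

# Synthetic 32-bit float depth map with one valid pixel.
dm = np.zeros((4, 4), dtype=np.float32)
dm[1, 2] = 853.25  # depth in mm
print(depth_at(dm, 1, 2))  # -> 853.25
```

Note that there is no RGB packed into these 32 bits: the whole value is one float z, and the colors in the viewer are only a rendering of those values.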
STL models can be converted into OBJ format in ways other than using the model editor. Is it possible to directly convert STL format to OBJ format using other 3D software and then import it into Mech-Viz for collision purposes? I have tried using other 3D software to convert the format, but it appears to be incompatible when imported into Mech-Viz.
It is possible that the OBJ models you exported from other 3D software are not convex bodies, which makes Mech-Viz consider them invalid. If it is convenient, could you please share an image of the exported model so we can take a look?
Are there any other 3D software options available that can convert to a convex OBJ format?
Currently, there are no other 3D software options that can directly export convex OBJ models.
Additionally, a “convex body” is an object in which the line segment connecting any two points of the object lies entirely within the object.
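The idea can be illustrated in 2D: a polygon is convex exactly when its boundary always turns the same way. A small self-contained check (illustrative only; Mech-Viz's own convexity test for OBJ models may differ):

```python
def is_convex_polygon(vertices):
    """Check whether a 2D polygon (vertices given in order) is convex.

    Convex means every cross product of consecutive edge vectors has
    the same sign, i.e. the boundary always turns the same way -- the
    2D analogue of the convex-body definition.
    """
    n = len(vertices)
    sign = 0
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False   # turn direction flipped: not convex
    return True

print(is_convex_polygon([(0, 0), (2, 0), (2, 2), (0, 2)]))          # -> True
print(is_convex_polygon([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # -> False
```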
When I plug in the license dongle and try to open Mech-Vision and Mech-Viz, I get a “Start Error.” Restarting the computer and reinstalling the software does not resolve the issue.
This situation is typically caused by missing DLL files on your computer. For specific solutions, please refer to the following link: [Software Error Message] ‘missing mmind_xxx.dll’ popup on Mech-Vision/Mech-Viz.
We’ve encountered several instances of program crashes and unexpected exits. The specific causes are unknown, but they most often occur while the program is being modified. Could you add an automatic backup feature for program crashes?
We will consider and discuss whether there is a feasible solution, but since a software crash indicates that the software is already behaving abnormally, for example, in cases where exceptions cannot be captured, it may still be impossible to execute the backup function as intended.
I’d like to ask which software you are referring to regarding the crash and unexpected-exit issues, and what the corresponding software version is. Can the crash be reliably reproduced? If so, could you provide specific reproduction steps or the relevant project data? If the problem is intermittent, please provide crash-related information to help us pinpoint the specific problem for repair. Information on collecting crash-related data can be found here: Collect Information about the Issue
Hello, regarding the “automatic backup during software crashes”: this is not technically feasible at present. For now, backups can only be made in advance.
Our goal is to resolve the crash issue, and if you are experiencing software crash problems, you can reach out to our technical support at any time.
In the manual’s robot configuration description, the drawings are not very clear. Could you provide a detailed explanation of the coordinate settings for each axis?
Thank you for your feedback. This robot configuration is a typical SphericalWrist_SixAxis configuration.
The main feature of this configuration is that when viewed from the front, the centers of rotation for the second and third axes are vertical, while the centers of rotation for the fourth, fifth, and sixth axes are horizontal.
When viewed from the side, the center of the flange is aligned vertically with the base center (the centers of rotation for each axis correspond to the position of the coordinate system).
All centers of rotation are projected onto the robot’s neutral plane, which is the plane perpendicular to the base plane that passes through the center of the flange end face circle (for some robots, the right-side reference plane meets these two conditions and can be used directly as the neutral plane).
A convenient way to confirm the centers of rotation is to create a sketch on the neutral plane, select the circular feature at the rotational joint, and generate a circular sketch. The center of this circular sketch is the center of rotation and serves as a reference point for establishing the coordinate system!
Question 1: Is the robot’s model a single overall model, or should each arm have its own exported model?
From the diagram, it seems like exporting a single overall model is sufficient. However, the manual suggests that individual components should be exported (as shown in the screenshot).
Question 2: When looking at the six coordinate systems from the side, are all the origins vertically aligned on the same line? Do their XYZ directions all follow the same orientation? If the base’s center of rotation and the six axes’ centers of rotation are offset, how should this be handled?
Question 3: The model format required for robot import is .mrob, but the models we create are in STL format. When we import them, Mech-Viz does not recognize them.
In the C:/Users/Administrator/AppData/Roaming/Mmind/roboty folder path, there is a .3dpkg file. How is this file generated?
Regarding questions 1 and 3: Mech-Viz versions 1.5 through 1.8 could package the STL models of each joint, together with the STL folder, into a .3dpkg file. That is why STL files you add appear as .3dpkg files when you revisit the folder. The export mentioned in the manual converts the robot files you add into the .mrob format (for easy loading on different devices); this step is likely covered in the tutorial. To do it, select “Tools” from the Mech-Viz top menu and choose to export the current robot.
Additionally, for question 3, the issue that STL models are not recognized on import can be investigated in two ways:
- Check if the names are compliant.
- Check if the algorithm parameter file punctuation is accurate and in compliance.
Question 2: In the SphericalWrist_SixAxis configuration, the centers of rotation lie on a straight line when viewed from the side; this is one of the criteria for identifying the robot configuration. If you notice that they are not on a vertical line, the robot is not suitable for the SphericalWrist_SixAxis configuration. With a few exceptions for certain robot brands, the XYZ directions are generally consistent.
Can we trigger two branches when communicating with a PLC?
Is there a way to achieve this when we want to trigger two branch changes?
Hello! Are you using the standard interface for communication? If so, which specific interface is it?
It is possible to trigger multiple branches by controlling the logic through changes in the status codes.
We are using the standard interface and a Siemens PLC. Do we achieve this by invoking the two branches continuously?
Is the interface circled in red in this diagram? Is the program you’re using self-written, or are you using the “MM_Set_Branch” function? Triggering two branches in succession requires interacting with this instruction twice.
If you are using the “Siemens PLC Client” interface, you can clear the status code after the first successful “Switch the branch” and then execute the “MM_Set_Branch” function again. Based on the new status code value, you can determine whether the second branch switch was successful.
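The two-step handshake can be sketched as follows. This is an illustrative Python sketch, not PLC code: `FakePLC`, its method names, and the status-code values are all placeholders — the real implementation uses the MM_Set_Branch function block and the status codes documented for your Standard Interface version.

```python
import time

STATUS_IDLE = 0      # placeholder values; the real status codes depend
STATUS_SUCCESS = 1   # on the Standard Interface version in use

class FakePLC:
    """Stand-in for the PLC registers the Standard Interface uses."""
    def __init__(self):
        self.status = STATUS_IDLE
    def write(self, branch_id, exit_port):
        self.status = STATUS_SUCCESS   # pretend the switch succeeded
    def read_status(self):
        return self.status
    def clear_status(self):
        self.status = STATUS_IDLE

def switch_branch(plc, branch_id, exit_port, timeout=5.0):
    """Switch one branch, wait for the success status code, then clear
    it so the result of a second switch can be verified separately."""
    plc.write(branch_id, exit_port)        # analogous to MM_Set_Branch
    deadline = time.time() + timeout
    while time.time() < deadline:
        if plc.read_status() == STATUS_SUCCESS:
            plc.clear_status()             # reset before the next switch
            return True
        time.sleep(0.05)
    return False

plc = FakePLC()
print(switch_branch(plc, branch_id=1, exit_port=1) and
      switch_branch(plc, branch_id=2, exit_port=1))  # -> True
```

The key point mirrored from the answer above is clearing the status code between the two switches, so the second result is not mistaken for the first.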
I’d like to inquire about the typical image capture time for various models. Could you provide a general range?