This post compiles a collection of question-and-answer threads translated into English by AI.
Please note that AI translation has its limitations; interpret specific content in light of the actual context.
How can I adjust the scanning speed of the LNX-8030?
Hello, according to the specifications of the LNX-8030, the scanning speed ranges from 3.3 to 15 kHz. How can I make this adjustment in the application?
Hello, the frequency is automatically adjusted based on the exposure time. When you adjust the exposure time, the scanning frequency displayed in the upper right corner will also change accordingly.
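As a rough illustration of that relationship (the exact mapping is firmware-defined; the inverse relation, the clamping to the 3.3–15 kHz specification range, and the function name below are assumptions for this sketch):

```python
def profile_rate_hz(exposure_us: float,
                    min_rate_hz: float = 3300.0,
                    max_rate_hz: float = 15000.0) -> float:
    """Illustrative only: one profile per exposure window,
    clamped to the LNX-8030's specified 3.3-15 kHz range."""
    rate = 1.0e6 / exposure_us  # exposure given in microseconds
    return max(min_rate_hz, min(max_rate_hz, rate))

print(profile_rate_hz(100))  # 100 us exposure -> 10000.0 Hz
```

Shortening the exposure time raises the displayed scanning frequency until the upper specification limit is reached.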
What is the maximum supported rail speed for the LNX-8030, LNX-8080, and LNX-8300 laser cameras?
This question primarily depends on the encoder resolution and interval time settings.
Currently, the maximum frame rate of the line laser camera is 15 kHz. If the encoder resolution is set to 1000 with 4 signal channels, meaning 1000 pulses are emitted per millimeter overall (250 pulses per millimeter per channel), and the trigger interval is set to 1, the speed is calculated as 15000 Hz / (250 pulses/mm × 1) = 60 mm/s.
For the fastest speed, still using this encoder as an example: the camera's point cloud resolution in the y-direction is set to 23.5 μm for better display quality, and the encoder pulse pitch is 1/250 = 0.004 mm, so the optimal trigger interval setting is approximately 0.0235 mm / 0.004 mm ≈ 6. With this encoder, the maximum speed achievable by the line laser camera is therefore 15000 Hz × 0.004 mm × 6 = 360 mm/s. Please adjust these settings according to your specific requirements.
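The calculation above can be sketched as follows (function and parameter names are illustrative, not part of any Mech-Mind API):

```python
def max_rail_speed_mm_s(frame_rate_hz: float,
                        pulses_per_mm_per_channel: float,
                        trigger_interval: int) -> float:
    """Speed = frame rate x distance traveled per trigger."""
    pulse_pitch_mm = 1.0 / pulses_per_mm_per_channel  # mm per encoder pulse
    return frame_rate_hz * pulse_pitch_mm * trigger_interval

# Trigger interval chosen so that one trigger corresponds to the
# desired y-direction resolution (23.5 um = 0.0235 mm):
interval = round(0.0235 / (1.0 / 250))  # ~6 pulses per trigger

print(max_rail_speed_mm_s(15000, 250, 1))         # 60.0 mm/s
print(max_rail_speed_mm_s(15000, 250, interval))  # 360.0 mm/s
```

This reproduces both figures from the thread: 60 mm/s at a trigger interval of 1, and 360 mm/s at the resolution-matched interval of 6.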
How to Improve UHP Camera Positioning Accuracy?
Project Name: Welding Guidance Project
Project Status: In Testing
Software Versions: Mech-Vision 1.7.2, Mech-Viz 1.7.2, Mech-Eye Viewer 2.1.0.
Issue: The project requires accuracy within 0.1 mm. Both surface matching and edge matching accuracy are above 0.1 mm. How can we improve the matching accuracy?
Product: Matching feature size is 30 mm x 20 mm.
Requirement: Improve positioning accuracy to within 0.1 mm.
You can enhance recognition accuracy through the following methods:
- Adjust the shooting angle to capture the workpiece feature point cloud.
- Fine-tune camera exposure parameters to optimize point cloud quality and reduce point cloud fluctuation errors.
- Optimize matching parameters to improve matching accuracy. For parameter tuning, refer to the documentation: 3D Fine Matching Lite.
- Place the capture points on the workpiece itself to reduce mapping errors. You can contact Mech-Mind technical personnel for a remote or on-site inspection in this regard.
How to Restore Factory Parameters for the Camera?
Camera Model: PRO-S
How to Operate the Restoration of Factory Parameters?
This camera, manufactured at the beginning of the year, does not have factory parameters built in. For cameras manufactured after May 2023, refer to the software versions and the factory-parameter restoration method shown in the following diagram:
If the camera’s internal parameter calibration error is significant due to external forces, it is recommended to contact Mech-Mind technical personnel to assess whether it needs to be returned to the factory. If there is no external force and internal parameter correction is needed, you can refer to: Errors in the Intrinsic Parameters Are Large.
Mech-Mind 3D Camera Frame Rate
What are the frame rates for Mech-Mind’s various structured light cameras?
The more pertinent technical parameter for real-world applications is the typical capture time.
Mech-Mind’s official website has a comprehensive camera parameter table.
For more detailed parameters, please refer to the user manual: Technical Specifications (V3), Technical Specifications (V4).
PRO M Camera Actual Transfer Speed Discrepancy
Camera Model: PRO M V4
Firmware Version: 2.1.0
Capture Method: (ETH)
Working Distance: 1500mm
Mech-Eye Viewer Version: 2.1.0
Mech-Vision Version: 1.7.1
The transfer speed displayed on Mech-Eye Viewer is 872 Mbps.
With unchanged image transfer size:
Depth Image: 4,608,096 bytes (~4.6 Mb) calculated transfer time is 0.005s or 5ms (4.6 Mb/872 Mbps)
2D Image: 6,912,097 bytes (~6.9 Mb) calculated transfer time is 0.0079s or 7.9ms (6.9 Mb/872 Mbps)
Actual Capture Values (Mech-Eye Viewer Log):
Capture Time + Transfer Time = 0.753s
Capture Time = 0.703s
Actual Transfer Time: 0.753s - 0.703s = 0.050s or 50ms [Calculated Value: 5ms]
Capture Time + Transfer Time = 0.1557s
Capture Time = 0.090s
Actual Transfer Time: 0.1557s - 0.090s = 0.0657s or 65.7ms [Calculated Value: 7.9ms]
May I ask which part is causing the delay in the transfer?
Hello, regarding your calculation of transfer times as demonstrated: “4,608,096 bytes (~4.6 Mb) calculated transfer time is 0.005s or 5ms (4.6Mb/872Mbps),” there are some minor issues.
To calculate a file's transfer time over the network, you can use the following formula:
Transfer Time (seconds) = File Size (bits) / Transfer Rate (bits/second)
Firstly, convert the file size from bytes (B) to bits (b):
File Size: 4.6 MB is equivalent to 4.6 × 8 = 36.8 Mb
Transfer Time = 36.8 Mb / 872 Mbps ≈ 0.0422 seconds = 42.2 milliseconds
The final result is approximately 42.2 milliseconds. This calculation doesn’t account for network latency, frame headers, and other factors. Therefore, the actual transfer time may vary slightly, and there’s also a small amount of encoding and decoding time in addition to network transfer time. Please refer to the actual values for accuracy.
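The corrected calculation can be sketched as follows (the function name is illustrative). Applying the same formula to the 2D image from the question gives roughly 63.4 ms, which is close to the observed 65.7 ms once latency and encoding overhead are considered:

```python
def transfer_time_ms(size_bytes: int, rate_mbps: float) -> float:
    """Ideal network transfer time, ignoring latency and framing overhead."""
    bits = size_bytes * 8                       # bytes (B) -> bits (b)
    return bits / (rate_mbps * 1_000_000) * 1000  # seconds -> milliseconds

print(transfer_time_ms(4_608_096, 872))  # depth image: ~42.3 ms
print(transfer_time_ms(6_912_097, 872))  # 2D image:    ~63.4 ms
```

The original 5 ms and 7.9 ms figures came from dividing megabytes by megabits per second without the factor-of-8 conversion.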
Camera version 2.1.0: After the camera is powered off and restarted, it shows that it cannot connect.
Camera version 2.1.0: After the camera is powered off and restarted, it still shows that it cannot connect. I have confirmed that the IP address is correct, the firewall is turned off, and Windows updates are paused.
To address this issue, please provide your camera model, serial number, exact camera firmware version, and the specific details of the connection problem. Does the software detect the camera but report that it cannot connect, or does it fail to display the camera's IP at all? If possible, please upload screenshots.
For troubleshooting related to the camera not connecting, you can refer to Troubleshooting guidance for camera connection problems.
Mech-Vision 1.7.1: Is it possible for secondary development to retrieve image rendering or point cloud data?
- Software Version: Mech-Vision 1.7.1
- Issue Description: In Mech-Vision 1.7.1, a customer has requested us to display images on the interface, as well as process data and final data. Can you provide a C# calling example or an API for this?
Hello, thank you for your feedback.
Currently, Mech-Vision 1.7.1 does not support general secondary development.
Possible ways to achieve the above include:
- Saving images or data in steps for retrieval by the upper-level program.
- Using the Python script function and employing a custom communication method to send images or data to the upper-level program.
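A minimal sketch of the second approach, assuming simple length-prefixed framing over TCP (the framing scheme and helper names are hypothetical illustrations, not a Mech-Vision API; the actual sending would use `socket.sendall` from within the Python script step):

```python
import struct

def pack_frame(payload: bytes) -> bytes:
    """Prefix the payload with a 4-byte big-endian length so the
    upper-level program knows how many bytes to read."""
    return struct.pack(">I", len(payload)) + payload

def unpack_frame(data: bytes) -> bytes:
    """Inverse of pack_frame: read the length prefix, return the payload."""
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length]

# Example round trip with stand-in image bytes:
frame = pack_frame(b"depth-image-bytes")
print(unpack_frame(frame))  # b'depth-image-bytes'
```

The upper-level program reads 4 bytes, decodes the length, then reads exactly that many payload bytes, which avoids message-boundary problems on a TCP stream.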
What is the default administrator password in Mech-Center V1.6.1? I haven't set it before.
The initial administrator password is:
Software 1.7.2 cannot be used after downloading it onto my laptop.
After downloading the latest version of the software on my personal laptop, both Mech-Vision and Mech-Viz cannot be opened, and an error message appears: mmind_vision.exe: Start Error. How should I handle this?
You can check whether your laptop’s CPU version is too old:
When trying to install version 1.7.0 or higher, running the software results in an error message indicating that mmind_property.dll cannot be found. Using dependency analysis software, I couldn’t find any missing dependencies either.
The reason for this error is:
Your computer’s CPU is too old and does not support the AVX2 instruction set, which is enabled by default during the compilation of the 1.7.0 software.
The solution is:
- Do not use an Intel CPU older than the 6th generation.
- Compatible CPU list: Intel Product Specification Advanced Search.
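A quick way to check for AVX2 support on Linux is to look at the CPU flags (a hedged sketch; on Windows, look the CPU model up in the Intel specification list linked above instead):

```python
def cpu_supports_avx2(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if 'avx2' appears in the CPU flags (Linux only).
    Returns False if the file cannot be read."""
    try:
        with open(cpuinfo_path) as f:
            return "avx2" in f.read().lower()
    except OSError:
        return False

print(cpu_supports_avx2())
```

If this returns False, the 1.7.0+ builds, which enable AVX2 at compile time, will fail to start with errors like the missing `mmind_property.dll` message above.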
The sand thickness on brake discs is uneven; when using contour matching on the upper surface, the match tilts relative to the lower-surface point cloud. How can this be resolved?
Software Version: 1.7.2.
Camera Model: LSR L.
(There is plastic on the last layer)
- The point cloud of the brake disc in the image appears subpar. You can try adjusting camera parameters to improve the point cloud quality.
- Consider preprocessing the point cloud to remove tray-related data during point cloud preprocessing to avoid interference from tray data.
- During the matching process, adjust matching parameters to enhance matching accuracy.
The Mech-Vision project stores depth maps in a 32-bit PNG image format with a grid. How can I convert it to the TIFF format used by Mech-Eye software?
How can I convert RGBA four-channel data into single-channel depth data?
In simple terms, we’re going from four 8-bit channels to one 32-bit channel. The diagram is as follows:
cv::Mat img32FC1 = cv::Mat(img8UC4.size(), CV_32FC1, img8UC4.data).clone();
You can also use OpenCV’s convertTo() function for the conversion. The key is to change from 8UC4 to 32FC1.
The binary data hasn’t changed; it’s all about how to parse it.
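The same reinterpretation can be sketched in Python with NumPy (the array shape is illustrative; the byte order of the four channels must match how the PNG was written):

```python
import numpy as np

h, w = 4, 5
# Stand-in for a depth map: one 32-bit float per pixel.
depth = np.arange(h * w, dtype=np.float32).reshape(h, w)

# Reinterpret the same bytes as 4 x 8-bit channels (no copy, no conversion).
rgba = depth.view(np.uint8).reshape(h, w, 4)

# Reinterpret back: 4 x 8-bit channels -> one 32-bit float channel.
restored = rgba.view(np.float32).reshape(h, w)
print(np.array_equal(depth, restored))  # True
```

As with the C++ snippet, nothing is converted numerically; `view` only changes how the underlying buffer is parsed.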
In scenarios involving multi-zone suction cups on cardboard boxes, should Mech-Vision use the extracted point cloud-based box dimensions or the standard box dimensions provided by the upper-level computer? This is because both options may have some degree of error, which could impact suction cup planning.
For single-sized boxes, the upper-level computer sends the box dimensions, while Mech-Vision utilizes multi-zone suction cups for disassembly and collision detection with the grasped object.
In practical settings, box dimensions may exhibit slight variations from the standard measurements, and during the disassembly process, it is advisable to use the dimensions calculated from the actual point cloud data to prevent collisions and other issues.
Matching brake discs with different sand thicknesses
- Software versions: Mech-Vision 1.7.2, Mech-Eye Viewer 2.1
- Camera model: LSR L
- Capture method: ETH
- Working distance: 2.9 to 3 meters
Issue description:
The brake discs are recognized on site at a height of approximately 2.9 to 3 meters, and there is some point cloud fluctuation during image capture. There are multiple models of brake discs; some models reflect strongly, causing missing points in the sand-surface point cloud, and some discs have varying sand thicknesses. If point filtering is used to distinguish them, areas with significant point cloud fluctuation become disconnected and do not form a complete circle, which leads to matching errors; if the point filter is set too small, the brake discs cannot be separated from the partitions. Is there a good solution for this situation?
The on-site workstations are limited, and there are fences around the workstations. Will this affect the image capture quality?
Comparing the data from the site, it was found that the reason for the above problem is the significant fluctuation in the lower-level point cloud:
Camera height of 3.1 meters results in significant point cloud fluctuations that do not meet recognition requirements.
Camera height of 2.68 meters results in minor point cloud fluctuations that can meet recognition requirements.
Methods to resolve the poor point cloud quality: Prioritize adjusting the camera exposure parameters to optimize the quality of the lower-level point cloud. If adjusting the exposure parameters does not resolve the issue, it is recommended to lower the camera suspension height to ensure the quality of the lowest-level point cloud.
May I ask where the delimiter can be modified when using the standard TCP/IP interface?
- Software version: Mech-Center 1.7.2
- Robot model: Epson Robot
- Issue description: Due to feedback from robot engineers, the delimiter is required to be a space. I couldn’t find a location to modify the delimiter in the ‘interface’ file under the ‘Mech-Center\src\interface’ directory. Is there a place where this can be modified? Are there plans to allow customization of both the delimiter and the end delimiter in the future?
Hello, this part involves source code, so it’s not convenient to provide it here. There are plans to open up this part of the interface in the future. If you have such a requirement, please contact the administrator for further communication.
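As an interim workaround, a small relay program between Mech-Center and the robot could rewrite the delimiter before forwarding each message (a hedged sketch; the message content below is illustrative, not the actual standard interface format):

```python
def rewrite_delimiter(message: str, old: str = ",", new: str = " ") -> str:
    """Replace the field delimiter in a received message, trimming any
    stray whitespace around each field."""
    return new.join(part.strip() for part in message.split(old))

print(rewrite_delimiter("1102, 0, 100.0, 200.0"))  # 1102 0 100.0 200.0
```

The relay would accept the TCP connection from the robot, connect to the standard interface itself, and pass traffic through this function in each direction.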
How can an industrial computer communicate simultaneously with a PLC and a robot?
Please provide information as comprehensive as possible so the technical team can address this.
Include the following details in the question, as needed:
- Software versions (including Mech-Vision, Mech-Viz, Mech-Eye Viewer): 1.7.2
- Robot model: Mitsubishi RV-8CRL
- The industrial computer needs to communicate simultaneously with the PLC and robot, with the requirement to send coordinates to the robot and x-y deviation values to the PLC.
It is recommended to establish communication with only one device on the vision side, and then have communication between the robot and PLC.
If the vision side communicates with both the PLC and robot simultaneously, it may increase the workload for communication configuration and could potentially introduce communication issues.
For specific projects, you can contact the Mech-Mind pre-sales solution team to discuss communication solutions together.
Could you please provide a dimension diagram for the commonly used 24V switch power supply for the DIN rail power supply version?
Could you please provide a dimension diagram for the commonly used 24V switch power supply for the DIN rail power supply version? In the Bill of Materials (BOM), I can only see “Delta 120W,” without a specific model number. It’s difficult to find related information on the official website without a specific model number.
We’ve used a robot, but the software doesn’t support this model. How can we import a new robot model?
I couldn’t find a specific robot in the Mech-Viz robot library. What steps should I take to make Mech-Viz software support this robot?
Robot models serve as resources for the software and can be imported at any time. Mech-Viz and Mech-Vision support the import of robot models in .mrob format.
You can resolve this issue by following these steps:
- Check if your model is available in Downloads: Robot 3D Models. If it’s there, you can download it from the library and import it into your local software.
- Create your robot model following the documentation on Robot Model and import it into your local software.
- Seek online technical support by posting in the community.
- For hotline consultation, you can reach out to 400-9696-010, and a sales or after-sales manager will assist you in resolving the issue.