Collection of AI-translated Q&A posts (compiled on 2023/09/15)

This post compiles question-and-answer threads that have been translated into English by AI.
Please note that AI translation has its limitations; interpret specific content in light of the actual context.

Camera

Adjust LNX-8030 scan rate

1.1 Q(2023/06/13)

How can I adjust the scanning speed of the LNX-8030?

Hello, according to the specifications of the LNX-8030, the scan rate ranges from 3.3 to 15 kHz. How can I adjust this in the software? Could you please explain how to make this adjustment?

1.1 A(2023/06/14)

Hello, the frequency is automatically adjusted based on the exposure time. When you adjust the exposure time, the scanning frequency displayed in the upper right corner will also change accordingly.

Mech-Eye 3D laser profiler

1.2 Q(2023/06/19)

3D Line Laser Profile Measurement Device

What is the maximum supported rail speed for the LNX-8030, LNX-8080, and LNX-8300 laser profilers?

1.2 A(2023/06/19)

This primarily depends on the encoder resolution and the trigger interval settings.

Currently, the maximum scan rate of the line laser profiler is 15 kHz. If the encoder resolution is set to 1000 with 4 signal channels, meaning 1000 counts per millimeter and 250 pulses per channel, then with a trigger interval of 1 (one scan per pulse) the maximum speed is 15000 Hz / 250 pulses/mm = 60 mm/s.

For the fastest speed, still using this encoder as an example: if the point cloud resolution in the y direction is set to 23.5 μm (0.0235 mm) for better display quality, and the encoder pulse equivalent is 1/250 = 0.004 mm, the optimal trigger interval is approximately 0.0235 mm / 0.004 mm ≈ 6 pulses. With this encoder, the maximum speed achievable in the line laser profiler application is therefore 60 mm/s × 6 = 360 mm/s. Please adjust these settings according to your specific requirements.
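As an illustration only, the calculation above can be written out as a short snippet. The variable names are ours for this example and are not parameters in the Mech-Eye software:

#include <cmath>
#include <iostream>

int main() {
    // Example figures from the answer above (not values read from the device).
    const double maxFrameRateHz = 15000.0;               // maximum profiler scan rate
    const double pulsesPerMm = 250.0;                     // encoder pulses per mm on one channel
    const double pulseEquivalentMm = 1.0 / pulsesPerMm;   // 0.004 mm of travel per pulse
    const double yResolutionMm = 0.0235;                  // desired y-direction resolution (23.5 um)

    // Trigger interval: number of encoder pulses between two scan triggers.
    const int triggerInterval =
        static_cast<int>(std::round(yResolutionMm / pulseEquivalentMm));  // ≈ 6

    // Maximum rail speed that still keeps every scan within the frame-rate budget.
    const double maxSpeedMmPerS = maxFrameRateHz / pulsesPerMm * triggerInterval;

    std::cout << "Trigger interval: " << triggerInterval << " pulses\n";   // 6
    std::cout << "Max rail speed:   " << maxSpeedMmPerS << " mm/s\n";      // 360 mm/s
}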

UHP positioning accuracy

1.3 Q(2023/06/28)

How to Improve UHP Camera Positioning Accuracy?

Project Name: Welding Guidance Project
Project Status: In Testing
Software Versions: Mech-Vision 1.7.2, Mech-Viz 1.7.2, Mech-Eye Viewer 2.1.0.
Camera: UHP-140
Issue: The project requires accuracy within 0.1 mm. The errors of both surface matching and edge matching exceed 0.1 mm. How can we improve the matching accuracy?
Product: Matching feature size is 30 mm x 20 mm.
image

Surface Matching:
image

Edge Matching:
image

Requirement: Improve positioning accuracy to within 0.1 mm.

1.3 A(2023/06/28)

You can enhance recognition accuracy through the following methods:

  1. Adjust the shooting angle so that the feature point cloud of the workpiece is captured well.
  2. Fine-tune the camera exposure parameters to optimize point cloud quality and reduce point cloud fluctuation errors.
  3. Optimize the matching parameters to improve matching accuracy. For parameter tuning, refer to the documentation: 3D Fine Matching Lite.
  4. Place the pick points used for matching on the workpiece to reduce mapping errors. You can contact Mech-Mind technical personnel to review the on-site setup remotely.

Restore factory intrinsic parameters for camera

1.4 Q(2023/07/08)

How to Restore Factory Parameters for the Camera?

Camera Model: PRO-S
SN: KEM10231A403E002
How do I restore the factory intrinsic parameters?

1.4 A(2023/07/08)

  1. This camera was manufactured at the beginning of the year and does not have factory intrinsic parameters built in. For cameras manufactured after May 2023, refer to the following diagram for the required software versions and the method to restore factory parameters:


    image

  2. If the camera’s intrinsic parameter error is large due to external force, it is recommended to contact Mech-Mind technical personnel to assess whether the camera needs to be returned to the factory. If no external force is involved and intrinsic parameter correction is needed, refer to: Errors in the Intrinsic Parameters Are Large.

Mech-Eye camera typical capture time

1.5 Q(2023/04/24)

Mech-Mind 3D Camera Frame Rate

What are the frame rates for Mech-Mind’s various structured light cameras?

1.5 A(2023/04/24)

The more pertinent technical parameter for real-world applications is the typical capture time.
Mech-Mind’s official website has a comprehensive camera parameter table.
For more detailed parameters, please refer to the user manual: Technical Specifications (V3), Technical Specifications (V4).

Model     Typical capture time (s)
NANO      0.6–1.1
UHP-140   0.6–0.9
PRO S     0.3–0.6
PRO M     0.3–0.6
LSR L     0.5–0.9
LSR S     0.5–0.9
DEEP      0.5–0.9
PRO XS    0.7–1.1
LOG S     0.3–0.5
LOG M     0.3–0.5

PRO M transfer delay

1.6 Q(2023/08/10)

PRO M Camera Actual Transfer Speed Discrepancy

Camera Model: PRO M V4
Firmware Version: 2.1.0
Capture Method: (ETH)
Working Distance: 1500mm
Mech-Eye Viewer Version: 2.1.0
Mech-Vision Version: 1.7.1
The transfer speed displayed on Mech-Eye Viewer is 872 Mbps.
image

With unchanged image transfer size:
Depth Image: 4,608,096 bytes (~4.6 Mb) calculated transfer time is 0.005s or 5ms (4.6 Mb/872 Mbps)
2D Image: 6,912,097 bytes (~6.9 Mb) calculated transfer time is 0.0079s or 7.9ms (6.9 Mb/872 Mbps)

Actual Capture Values (Mech-Eye Viewer Log):

Depth Image
Capture Time + Transfer Time = 0.753s
Capture Time = 0.703s
Actual Transfer Time: 0.753s - 0.703s = 0.050s or 50ms [Calculated Value: 5ms]

2D Image
Capture Time + Transfer Time = 0.1557s
Capture Time = 0.090s
Actual Transfer Time: 0.1557s - 0.090s = 0.0657s or 65.7ms [Calculated Value: 7.9ms]

May I ask which part is causing the delay in the transfer?

1.6 A(2023/08/11)

Hello, regarding your calculation of transfer times as demonstrated: “4,608,096 bytes (~4.6 Mb) calculated transfer time is 0.005s or 5ms (4.6Mb/872Mbps),” there are some minor issues.

To calculate a file’s transfer time over the network, you can use the following formula:
Transfer Time (seconds) = File Size (bits) / Transfer Rate (bits/second)

First, convert the file size from megabytes (MB) to megabits (Mb):
File size: 4.6 MB is equivalent to 4.6 × 8 = 36.8 Mb

Transfer Time = 36.8 Mb / 872 Mbps ≈ 0.0422 seconds = 42.2 milliseconds

The result is approximately 42.2 milliseconds. This calculation does not account for network latency, frame headers, and other overhead, so the actual transfer time will vary slightly; there is also a small amount of encoding and decoding time on top of the network transfer. Please treat the measured values as the reference.
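For reference, the same arithmetic written as a short snippet, using the figures from this thread; this is only a rough estimate that ignores latency, frame headers, and encoding/decoding time:

#include <iostream>

int main() {
    // Example figures from this thread (not values queried from the camera).
    const double linkRateMbps = 872.0;          // transfer speed reported by Mech-Eye Viewer
    const double depthImageBytes = 4608096.0;   // depth image size
    const double colorImageBytes = 6912097.0;   // 2D image size

    auto transferTimeMs = [&](double bytes) {
        const double megabits = bytes * 8.0 / 1e6;   // bytes -> megabits
        return megabits / linkRateMbps * 1000.0;     // seconds -> milliseconds
    };

    std::cout << "Depth image: " << transferTimeMs(depthImageBytes) << " ms\n";  // ≈ 42.3 ms
    std::cout << "2D image:    " << transferTimeMs(colorImageBytes) << " ms\n";  // ≈ 63.4 ms
}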

Camera disconnection after restarting

1.7 Q(2023/08/14)

Camera version 2.1.0: After the camera is powered off and restarted, it shows that it cannot connect.

Camera version 2.1.0: After the camera is powered off and restarted, it still shows that it cannot connect. I have confirmed that the IP address is correct, the firewall is turned off, and Windows updates are paused.

1.7 A(2023/08/14)

Hello,
To address this issue, we need some further information: your camera model, serial number, specific camera firmware version, and the exact details of the connection problem. Is the camera visible in the software but failing to connect, or does the software fail to display its IP at all? If possible, please upload screenshots.

For troubleshooting related to the camera not connecting, you can refer to Troubleshooting guidance for camera connection problems.

Software

Customized development in Mech-Vision 1.7.1

2.1 Q(2023/06/21)

Mech-Vision 1.7.1: Is it possible, through secondary development, to retrieve rendered images or point cloud data?

  • Software Version: Mech-Vision 1.7.1
  • Issue Description: In Mech-Vision 1.7.1, a customer has asked us to display images on our own interface, along with intermediate and final data. Can you provide a C# calling example or an API for this?

2.1 A(2023/06/21)

Hello, thank you for your feedback.

Currently, Mech-Vision 1.7.1 does not support general secondary development.

Possible ways to achieve the above include:

  • Saving images or data in Steps so that the upper-level program can retrieve them (a simple polling sketch is shown after this list).
  • Using the Python script function and a custom communication method to send images or data to the upper-level program.
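For the first approach, a minimal polling sketch for the upper-level program is shown below. The output folder path is a placeholder, and the only assumption is that a save step in the Mech-Vision project writes result images or data files into that folder:

#include <chrono>
#include <filesystem>
#include <iostream>
#include <set>
#include <thread>

namespace fs = std::filesystem;

int main() {
    // Hypothetical folder that a save step in the Mech-Vision project writes into.
    const fs::path watchDir = "D:/vision_output";
    std::set<fs::path> seen;

    while (true) {
        for (const auto& entry : fs::directory_iterator(watchDir)) {
            if (entry.is_regular_file() && seen.insert(entry.path()).second) {
                // A new result file appeared: load and display it in the customer UI here.
                std::cout << "New result file: " << entry.path() << '\n';
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
}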

Mech-Center 1.6.1 default password

2.3 Q(2023/07/10)

What is the default administrator password in Mech-Center V1.6.1? I have never set one.

2.3 A(2023/07/11)

The initial administrator password is: 123456.

Software version 1.7.2 cannot be used in laptop

2.4 Q(2023/07/15)

Software 1.7.2 cannot be used after downloading it onto my laptop.
After downloading the latest version of the software onto my personal laptop, neither Mech-Vision nor Mech-Viz can be opened, and an error message appears: mmind_vision.exe: Start Error. How should I handle this?
image

2.4 A(2023/07/15)

Check whether your laptop’s CPU is too old. The known symptom: after installing version 1.7.0 or higher, running the software produces an error saying that mmind_property.dll cannot be found, and dependency-analysis tools do not report any missing dependencies.
image

The reason for this error is:
Your computer’s CPU is too old and does not support the AVX2 instruction set, which is enabled by default when the 1.7.0 software is compiled (a quick way to check for AVX2 support is sketched after the solution list below).

The solution is:

  1. Do not use an Intel CPU older than the 6th generation.
  2. Compatible CPU list: Intel Product Specification Advanced Search.
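If you want to verify AVX2 support programmatically rather than by looking up the CPU model, the following is a minimal check. It assumes GCC or Clang (__builtin_cpu_supports is a compiler builtin) and is not part of the Mech-Mind software:

#include <iostream>

int main() {
    // Query the running CPU for the AVX2 feature flag.
    if (__builtin_cpu_supports("avx2"))
        std::cout << "CPU supports AVX2; the 1.7.x software should start normally.\n";
    else
        std::cout << "CPU does not support AVX2; expect the start error described above.\n";
}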

Profile mismatch of brake disks

2.5 Q(2023/07/18)

The sand thickness on the brake discs is uneven. When contour matching is performed on the upper surface, the match tilts against the lower surface point cloud. How can this be resolved?
Software Version: 1.7.2.
Camera Model: LSR L.
image


(There is plastic on the last layer)

2.5 A(2023/07/18)

  1. The point cloud of the brake disc in the image appears subpar. You can try adjusting camera parameters to improve the point cloud quality.
  2. Consider preprocessing the point cloud to remove tray-related data during point cloud preprocessing to avoid interference from tray data.
  3. During the matching process, adjust matching parameters to enhance matching accuracy.

Convert 32-bit .png (in Mech-Vision) to .tiff (in Mech-Eye Viewer)

2.6 Q(2023/07/20)

The Mech-Vision project stores depth maps in a 32-bit PNG image format with a grid. How can I convert it to the TIFF format used by Mech-Eye software?

How can I convert RGBA four-channel data into single-channel depth data?

2.6 A(2023/07/20)

Hello.
In simple terms, we’re going from four 8-bit channels to one 32-bit channel. The diagram is as follows:
image

Reference code:

// Reinterpret the 8UC4 buffer as a single-channel 32-bit float image, then deep-copy it
cv::Mat img32FC1 = cv::Mat(img8UC4.size(), CV_32FC1, img8UC4.data).clone();

The key is to reinterpret the buffer from 8UC4 to 32FC1: the binary data itself does not change, only how it is parsed. (Note that cv::Mat::convertTo() converts pixel values and keeps the channel count, so it cannot perform this four-channel-to-one-channel reinterpretation.)
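Putting it together, a minimal end-to-end sketch of the PNG-to-TIFF conversion, assuming OpenCV is available and the PNG written by Mech-Vision is an 8-bit, 4-channel image; the file names here are placeholders:

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load the depth map exactly as stored (8UC4), without any automatic conversion.
    cv::Mat img8UC4 = cv::imread("depth_from_mech_vision.png", cv::IMREAD_UNCHANGED);
    if (img8UC4.empty() || img8UC4.type() != CV_8UC4) {
        std::cerr << "Expected an 8-bit, 4-channel PNG depth map.\n";
        return 1;
    }

    // Reinterpret the same bytes as one 32-bit float channel, then deep-copy.
    cv::Mat img32FC1 = cv::Mat(img8UC4.size(), CV_32FC1, img8UC4.data).clone();

    // Write the depth data as a 32-bit float TIFF (the format referred to in the question).
    cv::imwrite("depth_for_mech_eye.tiff", img32FC1);
    return 0;
}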

Carton dimensions: point-cloud-based or upper-level-computer-provided?

2.7 Q(2023/07/26)

In scenarios involving multi-zone suction cups on cardboard boxes, should Mech-Vision use the extracted point cloud-based box dimensions or the standard box dimensions provided by the upper-level computer? This is because both options may have some degree of error, which could impact suction cup planning.

For single-sized boxes, the upper-level computer sends the box dimensions, while Mech-Vision uses the multi-zone suction cup for depalletizing and for collision detection with the picked box.


image

2.7 A(2023/07/27)

In practice, box dimensions may deviate slightly from the standard measurements, so during depalletizing it is advisable to use the dimensions calculated from the actual point cloud data to prevent collisions and other issues.

Match thin brake discs with thick sand

2.8 Q(2023/08/01)

Match the brake discs with different sand thicknesses.

  • Software versions: Mech-Vision 1.7.2, Mech-Eye Viewer 2.1.

  • Camera model: LSR L.

  • Capture method: ETH.
    Working distance: 2.9 to 3 meters.

  • Illustrations related to the issue (screenshots, videos):

  • Issue description:
    On site, brake discs are recognized at a height of approximately 2.9 to 3 meters, and there is some point cloud fluctuation during image capture. There are multiple brake disc models; some have strong reflections, which causes missing points on the sand surface in the point cloud, and some have varying sand thicknesses. If point filtering is used to distinguish them, areas with significant point cloud fluctuation become disconnected and do not form a complete circle, which leads to matching errors; if the point filter is set too small, the brake discs cannot be separated from the partitions. Is there a good solution for this situation?
    The on-site workspace is limited, and there are fences around the workstations. Will this affect the image capture quality?



2.8 A(2023/08/02)

  1. Comparing the data from the site, it was found that the cause of the above problem is the significant fluctuation in the bottom-layer point cloud:
    Camera height of 3.1 meters results in significant point cloud fluctuations that do not meet recognition requirements.


    Camera height of 2.68 meters results in minor point cloud fluctuations that can meet recognition requirements.
    image

  2. Methods to resolve the poor point cloud quality: prioritize adjusting the camera exposure parameters to optimize the quality of the bottom-layer point cloud. If adjusting the exposure parameters does not resolve the issue, it is recommended to lower the camera mounting height to ensure the quality of the lowest layer’s point cloud.

Modify delimiter

2.9 Q(2023/08/02)

May I ask where the delimiter can be modified when using the standard TCP/IP interface?

  • Software version: Mech-Center 1.7.2
  • Robot model: Epson Robot
  • Issue description: Due to feedback from robot engineers, the delimiter is required to be a space. I couldn’t find a location to modify the delimiter in the ‘interface’ file under the ‘Mech-Center\src\interface’ directory. Is there a place where this can be modified? Are there plans to allow customization of both the delimiter and the end delimiter in the future?

2.9 A(2023/08/02)

Hello, this part involves source code, so it’s not convenient to provide it here. There are plans to open up this part of the interface in the future. If you have such a requirement, please contact the administrator for further communication.

Others

IPC communicates with PLC and robot simultaneously

3.1 Q(2023/06/12)

How can an industrial computer communicate simultaneously with a PLC and a robot?

  • Software versions (including Mech-Vision, Mech-Viz, Mech-Eye Viewer): 1.7.2
  • Robot model: Mitsubishi RV-8CRL
  • The industrial computer needs to communicate simultaneously with the PLC and robot, with the requirement to send coordinates to the robot and x-y deviation values to the PLC.

3.1 A(2023/06/12)

It is recommended to establish communication with only one device on the vision side, and then have communication between the robot and PLC.
If the vision side communicates with both the PLC and robot simultaneously, it may increase the workload for communication configuration and could potentially introduce communication issues.
For specific projects, you can contact the Mech-Mind pre-sales solution team to discuss communication solutions together.

DELTA rail power supply

3.2 Q(2023/07/06)

Could you please provide a dimension diagram for the commonly used 24 V DIN-rail switching power supply?

Could you please provide a dimension diagram for the commonly used 24 V DIN-rail switching power supply? In the Bill of Materials (BOM), I can only see “Delta 120W” without a specific model number, and it is difficult to find related information on the official website without one.

3.2 A(2023/07/06)

Delta DRP024V120W1AA:


Robot model not supported in Mech-Viz

3.3 Q(2023/08/14)

We are using a robot that the software does not support. How can we import a new robot model?

Issue:
I couldn’t find a specific robot in the Mech-Viz robot library. What steps should I take to make Mech-Viz software support this robot?

Software Version:
1.7.x

3.3 A(2023/08/14)

Background:
Robot models serve as resources for the software and can be imported at any time. Mech-Viz and Mech-Vision support the import of robot models in .mrob format.

You can resolve this issue by following these steps:

  1. Check if your model is available in Downloads: Robot 3D Models. If it’s there, you can download it from the library and import it into your local software.
  2. Create your robot model following the documentation on Robot Model and import it into your local software.
  3. Seek online technical support by posting in the community.
  4. For hotline consultation, you can reach out to 400-9696-010, and a sales or after-sales manager will assist you in resolving the issue.