This post compiles a collection of question-and-answer threads that have been translated into English by AI.
Please note: AI translation has its limitations, and the specifics should still be interpreted in light of the actual context.
How can we adjust the 2D exposure for highly reflective metal parts when using the same-texture photography method with a laser camera?
When workpieces sit tightly against each other, their edges stick together in the generated black-and-white image. Is it necessary to lower the target grayscale value and then brighten the image again before use?
- Traditional methods involve multiple-angle lighting to ensure workpiece edge contrast.
- A color version camera can be used to increase edge color contrast.
How to address the challenge of small workpiece point cloud matching?
Background: Battery cell gripping project, using LSR L camera based on scene requirements.
Scene Description: The stack height for the battery cell gripping project is 1400mm, and incoming material sizes vary. To determine gripper parameters based on the battery cell size, it’s necessary to measure the specific cell dimensions and maintain an overall accuracy within 1mm.
Accuracy Requirement: Within 1 mm
Current Problem Status: The LSR L camera meets scene requirements but cannot achieve the required sub-1mm accuracy.
Camera Model: LSR L
Camera Firmware Version: Release 2.02
In this scenario, the LSR L camera covers the entire field of view but cannot meet the dimensional-measurement requirement for individual battery cells. Adding a high-precision Pro S camera, mounted eye-in-hand (EIH) and working alongside the LSR L, fulfills the size-measurement requirement and resolves the issue.
How should one adjust the camera exposure for relatively thin metal objects?
In situations like the one shown in the image, can the contrast threshold be modified?
For random-picking projects with disordered objects like these, if high-quality 2D images are needed, consider adding supplementary lighting to achieve better results.
What are the advantages of binocular vision + structured light?
- Binocular vision + structured light or monocular vision + structured light can already achieve 3D reconstruction. What are the advantages of using binocular vision + structured light?
- Which models currently utilize binocular vision + structured light?
Binocular vision requires two 2D cameras, and by calculating the disparity between the two cameras, it can infer the three-dimensional structure of the scene.
- The projection of structured light patterns enhances the accuracy and robustness of depth information.
- Both binocular vision and structured light require significant computational resources for processing. Combining these two technologies allows us to leverage the parallel processing advantage of binocular vision to improve processing efficiency.
Mech-Mind intelligent cameras currently employ binocular vision + structured light. The models include LSR L, LSR S, DEEP, and PRO XS.
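The disparity-to-depth relation behind binocular reconstruction can be sketched in a few lines. This is a generic stereo-triangulation illustration, not Mech-Mind's implementation; the focal length and baseline values are made up.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo triangulation: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in metres
    disparity_px -- horizontal shift of a feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted by 140 px between two cameras 0.5 m apart (f = 1400 px)
# lies 5 m away; smaller disparity means a farther point.
print(depth_from_disparity(1400, 0.5, 140))  # -> 5.0
```

Projecting structured-light patterns onto the scene gives textureless surfaces distinctive features, which is what makes the disparity search reliable in practice.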
What level of ambient-light resistance do the different cameras achieve?
- Camera: Laser L v3
- Firmware: Approximately 1.6.1
- Question: Up to how many lux of ambient light can the Laser L v3 camera tolerate?
Are there corresponding ambient-light specifications for the other camera models?
The ambient-light resistance rating depends directly on the object being measured and the working distance.
- When capturing moderately reflective workpiece point clouds at a height of 1 meter in an ambient light environment of 80,000-100,000 lux, Laser L V3S can produce high-quality point cloud images.
- When capturing moderately reflective workpiece point clouds at a height of 2 meters in an ambient light environment of 40,000 lux, Laser L V3S can produce high-quality point cloud images.
Details for other models will be summarized in the community later.
When the visual system communicates with the robot, it connects successfully using standard interfaces and adapters. However, there is a continuous error when triggering the camera to take a photo. How can we resolve this issue?
Robot Brand and Model: Stäubli Robot
Camera Model: NANO-500M
Camera Installation Method: EIH
Communication Protocol: TCP/IP, message format in strings
Software Version: 1.7.0
Issue Description: Error in communication with the robot
We have attempted the following troubleshooting methods, but the root cause remains unclear:
- Used standard interfaces and reviewed the command information for standard interfaces.
- Employed the adapter and checked the generated files.
- Tested communication with both the camera and the robot using a debugging assistant.
In the Mech-Center logs shown in the image, many `\x00` (null) characters follow the received commands. The issue appears to lie with the commands sent by the customer's robot.
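One quick way to confirm and work around the padding on the vision side is to strip the trailing null bytes before parsing. This is a hypothetical sketch (the command text and parsing are made up, not the actual standard-interface code); the proper fix is for the robot program to send unpadded strings.

```python
def clean_command(raw: bytes) -> str:
    """Drop trailing NUL (0x00) padding and stray whitespace, then decode."""
    return raw.rstrip(b"\x00").decode("ascii").strip()

# Fixed-length robot string buffers are often padded with NUL bytes:
raw = b"TRIGGER_CAPTURE\x00\x00\x00\x00"
print(repr(clean_command(raw)))  # -> 'TRIGGER_CAPTURE'
```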
A circular metal part has a smooth surface, and point cloud gaps appear at the workpiece's edges. How can this be resolved?
See the flash and point cloud images below:
Analyzing the flash image:
- The missing points in these areas are most likely caused by specular (mirror-like) reflections that leave those points too dark; the gaps ultimately stem from low grayscale values and a low signal-to-noise ratio.
- To raise the signal-to-noise ratio and improve grayscale and contrast in these areas, consider increasing the 3D exposure and the laser power as needed.
When adjusting the camera parameters, I noticed significant fluctuations in the point cloud. I’d like to inquire whether multiple camera exposures can alleviate these point cloud fluctuations. Are there any recommended settings?
A single high exposure can lead to overexposure and result in point cloud fluctuations. Would adding a low exposure after a high one help mitigate these fluctuations, especially when imaging black objects? Are there any recommended exposure parameters for such situations?
I’ve configured three exposures (2ms, 4ms, 20ms). Does the point cloud quality primarily depend on the initial 2ms exposure?
Additionally, will adjusting the contrast threshold affect the capture time, and when should one consider adjusting the contrast threshold?
Multiple exposures follow a first-come, first-served principle: once an earlier exposure in the sequence yields a valid point, later exposures are not used for that point. High exposure generally produces better point cloud quality, so the sequence should run from the highest exposure down to the lowest. Because of this principle, placing the high exposure first tends to reduce fluctuation.
Adjusting the contrast threshold typically has minimal impact on capture time, because the threshold is applied during point cloud post-processing rather than during capture.
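The first-come, first-served rule can be sketched as follows: for each pixel, keep the value from the earliest exposure (in the configured order) that produced a valid depth. The NaN-for-invalid convention here is an assumption for illustration, not the camera's internal representation.

```python
import numpy as np

def merge_exposures(depth_maps):
    """First-come, first-served merge of per-exposure depth maps.

    depth_maps -- list of 2-D float arrays in the configured order
                  (e.g. highest exposure first); NaN marks invalid pixels.
    """
    merged = np.full_like(depth_maps[0], np.nan)
    for d in depth_maps:
        # Fill only pixels that no earlier exposure has already resolved.
        mask = np.isnan(merged) & ~np.isnan(d)
        merged[mask] = d[mask]
    return merged

high = np.array([[1.0, np.nan]])
low = np.array([[9.0, 2.0]])
# Pixel 0 keeps 1.0 from the high exposure; pixel 1 falls back to the low one.
print(merge_exposures([high, low]))  # -> [[1. 2.]]
```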
In a scene of neatly stacked turnover boxes, the surface point cloud quality is good, but because the boxes fit tightly together, edge extraction is difficult and occasional mismatches occur. How can this be resolved?
For such scenarios, would it be better to use a deep learning model for segmentation and then matching?
There are two approaches to try:
Option 1: First try the new 3D edge extraction algorithm. If it sufficiently constrains the degrees of freedom of the turnover-box edges, use full-scene edge matching.
Option 2: If stable 3D edge features cannot be obtained, deep learning is still required.
What should be considered when using these two approaches? For instance, does training with deep learning take a long time? How should it be handled when there are many types and significant changes in lighting conditions?
Deep learning training won't take long: you can fine-tune the cardboard "super model" for 100 epochs. For scenarios with many object types and large lighting variation, simply add the new categories and images under the different lighting conditions to the base model and iterate.
(Note: Try to avoid overexposed or underexposed lighting conditions on-site.)
Mech-Vision 1.7.0: logistics project takes a long time to run.
In the parcel-logistics project, the single step “Predict Pick Points V2” takes more than 3 seconds to run. Is this duration reasonable? The conveyor carrying express parcels moves quite fast.
Slow speed may be attributed to several factors, including:
- Inadequate GPU on the on-site running machine (We recommend using 20 series graphics cards).
- Incorrect adjustment of parameters for “Predict Pick Points V2.”
- Input images before predicting pickup points are too large; consider cropping the ROI (Region of Interest).
For specific issues, you can contact Mech-Mind engineers for remote assistance.
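Cropping the ROI mentioned above amounts to an array slice before the prediction step. In practice the ROI is normally configured inside Mech-Vision itself; this sketch, with hypothetical coordinates and frame size, just shows the idea of shrinking the input.

```python
import numpy as np

def crop_roi(image, x, y, w, h):
    """Return the (h, w) region starting at column x, row y.

    Feeding only this region to the prediction step reduces the number
    of pixels the model must process, which shortens the runtime.
    """
    return image[y:y + h, x:x + w]

full = np.zeros((1200, 1920, 3), dtype=np.uint8)   # hypothetical camera frame
roi = crop_roi(full, x=400, y=200, w=800, h=600)
print(roi.shape)  # -> (600, 800, 3)
```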
Do the API functions related to deep learning only support the four methods included in DLK? Is it possible to implement them for any object?
Regarding the deep learning issue, please also provide:
The latest available version of DLK is Mech-DLK 2.3.0. We currently intend to embed it through the API, but because the incoming materials are high-volume and unpredictable, we would like to know whether this is feasible for arbitrary objects, given that DLK does not support training on arbitrary objects.
As of now, the latest version, Mech-DLK-2.3.0, does not support training deep learning models for grasping arbitrary objects, and it does not support API calls for this purpose. The models for arbitrary objects are currently only used in Mech-Vision.
Mech-Eye Viewer 2.1.0 Camera Firmware Update
Issue: After applying Mech-Eye Viewer 2.1.0 camera firmware update and confirming the camera reboot, the camera does not appear in Mech-Eye Viewer for a long time. Clicking the “Camera List” button also does not show the camera.
- Checked for IP address conflicts - none found.
- Tried power-cycling the camera, unplugging and plugging it back, but it still cannot be detected in Mech-Eye Viewer.
Camera Model: LSR S
Because the LSR S is still in the trial-production phase, it has not yet been adapted to the latest 2.1.0 release.
Temporary solution: while on the 2.1.0 firmware, a special build must be used.
May I inquire if our entire software suite can run on a Windows virtual machine?
Do we have any relevant practical project implementation experience?
In theory, it is possible, but it requires allocating sufficient memory and graphics memory to the virtual machine.
How can Mech-Vision software obtain a single-channel depth map?
I can obtain a depth map from Mech-Vision’s step, but it contains color information. Is it possible to convert it into a single-channel grayscale image with only height information?
The depth map obtained in Mech-Vision is a single-channel, 32-bit image. In other words, each pixel stores a float value to represent height information. The visualization of the depth map in Mech-Vision is in color, while some software visualizes it in black and white. Could you provide more details about the issue to see if there is a suitable solution?
Is it possible to convert it to a 16-bit or 8-bit depth map? Typical 2D algorithms do not support images in the CV_32FC1 format.
All depth maps are in single float format (32-bit). Are you referring to converting the depth map into a color image? (Note that such a color image may not be convertible into a point cloud.)
Mech-Vision uses 8-bit 4-channel storage to save the 32-bit single-channel depth map.
import numpy as np
img_depth = np.frombuffer(img.tobytes(),dtype=np.float32)
img_depth = img_depth.reshape(img.shape[0:2])
if name == ‘main’:
img = cv2.imread(‘depth_image_00000.png’,-1)
img_depth = change_image(img)
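If the 32-bit float map must feed a 2D algorithm that only accepts integer images, one option (an illustration, not a built-in Mech-Vision feature) is to quantize it to 16-bit. The metres-to-millimetres scale below is an assumption; adjust it if your depth values are already in millimetres. Note the precision loss, and that the quantized result cannot be converted back into an exact point cloud.

```python
import numpy as np

def to_uint16_depth(depth_f32, scale=1000.0):
    """Quantize a CV_32FC1 depth map to CV_16UC1.

    scale=1000.0 assumes metres in, millimetres out (an assumption).
    NaNs become 0 and values beyond the uint16 range are clipped.
    """
    d = np.nan_to_num(depth_f32, nan=0.0) * scale
    return np.round(np.clip(d, 0, 65535)).astype(np.uint16)

depth = np.array([[0.5, 1.234], [np.nan, 2.0]], dtype=np.float32)
print(to_uint16_depth(depth))  # values become 500, 1234, 0, 2000 (mm)
```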
Regarding DLK deep learning, does the API version 2.3.0 support C#? When will C# support be added to the DLK API for version 2.4.0? Can you provide any relevant C# API calling demos?
For deep learning inquiries, please also provide:
A customer wishes to use DLK’s API for development, and they are using the C# language. They want to confirm if version 2.3.0 supports C# API or not, and when the C# version of the API for version 2.4.0 will be available. The customer needs a demo of DLK’s C# interface, if available.
Currently, Mech-SDK version 2.3.0 supports C#, and it includes C# related content within the software. Models in version 2.4.0, apart from classification and cascade models, can all be run using Mech-SDK version 2.3.0.
Mech-SDK 2.4.0 will gradually be published on Git: C interfaces first, with C++, C#, and Python interfaces to follow.
Which version of Mech-Eye Viewer should be used with Mech-Mind line laser cameras?
I’d like to inquire about which version of Mech-Eye Viewer should be used with Mech-Mind line laser cameras. I noticed it requires a special version, but I couldn’t find a download source on the website.
Hello, the software for line laser cameras is currently being used on a small scale.
At the moment, the installation package requires you to contact your sales manager or technical pre-sales support.
Thank you for your understanding.
Do you need to use an industrial computer with a GPU for any object functionality?
- step: Any object v2, not DLK training
- Mech-Vision version 1.7.1
Currently, it is necessary to use an industrial computer with a GPU for any object functionality.
Have we conducted any foreign object detection projects?
The images above show the inspection targets; there may be foreign objects inside that need to be detected.
For a specific project evaluation, please contact Mech-Mind corresponding pre-sales team for assistance. Thank you for your cooperation.