Collection of AI-translated Q&A posts (compiled on 2023/09/08)

This post compiles question-and-answer threads that have been translated into English by AI.
Please note: AI translation has its limitations, so please interpret the specific content in its actual context.

Camera

Adjust 2D exposure for highly reflective workobjects

1.1 Q (20230411)

When using the same-texture photography method with a laser camera, how can the 2D exposure be adjusted for highly reflective metal parts?
If the workpieces are placed tightly against each other, their edges stick together in the generated black-and-white image. Is it necessary to lower the target grayscale value and then brighten the image again before use?
[image]

1.1 A (20230411)

  1. The traditional approach is to light the scene from multiple angles to ensure contrast at the workpiece edges.
  2. A color-version camera can be used to increase the color contrast at the edges.

Point cloud matching for small workobjects

1.2 Q (20230410)

How can the challenge of point cloud matching for small workpieces be addressed?
Background: Battery cell gripping project, using LSR L camera based on scene requirements.
Scene Description: The stack height in this battery cell gripping project is 1400 mm, and incoming material sizes vary. To determine the gripper parameters from the battery cell size, the specific cell dimensions must be measured with an overall accuracy within 1 mm.
Accuracy Requirement: Within 1 mm
Current Problem Status: The LSR L camera meets scene requirements but cannot achieve the required sub-1mm accuracy.
Camera Model: LSR L
Camera Firmware Version: Release 2.02


1.2 A (20230411)

In the current scenario, while the LSR L camera can cover the entire field of view, it falls short of the size measurement requirement for individual battery cells. To address this, an additional high-precision Pro S camera, mounted using the EIH (eye-in-hand) method and used in conjunction with the LSR L camera, can fulfill the size measurement requirement and thereby resolve the issue.

Adjust camera exposure for thin metal workobjects

1.3 Q (20230412)

How should one adjust the camera exposure for relatively thin metal objects?


In situations like the one shown in the image, can the contrast threshold be modified?

1.3 A (20230413)

For random picking projects with objects like this, if high-quality 2D images are needed, consider adding supplemental lighting to achieve better results.

Advantages of binocular vision & structured light

1.4 Q (2023/04/13)

What are the advantages of binocular vision + structured light?

  1. Binocular vision + structured light or monocular vision + structured light can already achieve 3D reconstruction. What are the advantages of using binocular vision + structured light?
  2. Which models currently utilize binocular vision + structured light?

1.4 A (2023/04/14)

  1. Binocular vision requires two 2D cameras; by calculating the disparity between the two cameras, the three-dimensional structure of the scene can be inferred (see the sketch after this list).

    • Projecting structured light patterns enhances the accuracy and robustness of the depth information.
    • Both binocular vision and structured light require significant computational resources for processing. Combining the two technologies allows the parallel-processing advantage of binocular vision to be leveraged to improve processing efficiency.
  2. Mech-Mind intelligent cameras currently employ binocular vision + structured light. The models include LSR L, LSR S, DEEP, and PRO XS.
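
For reference, the basic geometric relationship behind recovering depth from binocular disparity is Z = f × B / d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity. Below is a minimal sketch with made-up numbers; they are illustrative only and not specifications of any Mech-Mind camera.

# Depth from disparity: Z = f * B / d
# All numbers below are illustrative only, not actual camera specifications.
focal_length_px = 2000.0   # focal length expressed in pixels
baseline_m = 0.15          # distance between the two 2D cameras, in meters
disparity_px = 150.0       # disparity of a matched point between the left and right images

depth_m = focal_length_px * baseline_m / disparity_px
print(f"Estimated depth: {depth_m:.2f} m")   # 2.00 m for these numbers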

Ambient light resistance of different cameras

1.5 Q (2023/06/01)

How strong is the ambient light resistance of different cameras?

  • Camera: Laser L v3
  • Firmware: Approximately 1.6.1
  • Question: How many lux of ambient light can the Laser v3 camera withstand?
    Are there corresponding ambient light resistance specifications for other camera models?

1.5 A (2023/06/02)

Ambient light resistance depends directly on the object being measured and the working distance.

  • When capturing moderately reflective workpiece point clouds at a height of 1 meter in an ambient light environment of 80,000-100,000 lux, Laser L V3S can produce high-quality point cloud images.
  • When capturing moderately reflective workpiece point clouds at a height of 2 meters in an ambient light environment of 40,000 lux, Laser L V3S can produce high-quality point cloud images.

Details for other models will be summarized in the community later.

Software

Camera capture error when robot communicates with vision system

2.1 Q (20230412)

When the vision system communicates with the robot, the connection is established successfully using both the standard interface and the adapter. However, an error occurs every time the camera is triggered to capture an image. How can we resolve this issue?

Robot Brand and Model: Stäubli Robot
Camera Model: NANO-500M
Camera Installation Method: EIH
Communication Protocol: TCP/IP, with messages formatted as strings
Software Version: 1.7.0
Issue Description: Error in communication with the robot

We have attempted the following troubleshooting methods, but the root cause remains unclear:

  1. Used standard interfaces and reviewed the command information for standard interfaces.
  2. Employed the adapter and checked the generated files.
  3. Tested communication with the camera and with the robot using a debugging assistant.

2.1 A (20230412)

In the logs shown under “Center” in the image, many “x00” (null) characters can be seen following the received commands. The issue appears to be related to the commands sent by the customer’s robot.
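
To confirm whether trailing “x00” (null) bytes are really present in the robot’s commands, one quick check is to print the raw bytes received on the vision-system side before any parsing. The sketch below is a minimal, hypothetical example: the port and the idea of stripping the padding are placeholders, not the actual standard-interface configuration.

import socket

HOST, PORT = "0.0.0.0", 50000   # placeholder address and port, not the real standard-interface port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen(1)
    conn, addr = server.accept()
    with conn:
        data = conn.recv(1024)
        # repr() makes padding such as b'\x00' visible instead of showing blank characters
        print("raw bytes:", repr(data))
        # Strip trailing null bytes before parsing the string command
        command = data.rstrip(b"\x00").decode("ascii", errors="replace").strip()
        print("parsed command:", command)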

Defective point cloud at the edges of a metal workobject

2.2 Q (20230413)

A circular metal part has a smooth surface, but point cloud gaps appear at the workpiece’s edges. How can this be resolved?
See the flash image and point cloud images below:
[image]
[image]

2.2 A (20230414)

Analyzing the flash image:

  1. It can be inferred that the missing points in these areas are caused by mirror-like (specular) reflections, which leave these points excessively dark. The main cause of the point cloud gaps is the low grayscale value and low signal-to-noise ratio.
  2. To increase the signal-to-noise ratio, enhance the grayscale and contrast in these areas. You may consider raising the 3D exposure and increasing the laser intensity as needed.

Adjust camera parameters to mitigate point cloud fluctuation

2.3 Q (2023/04/14)

While adjusting the camera parameters, I noticed significant fluctuations in the point cloud. Can multiple camera exposures alleviate these point cloud fluctuations, and are there any recommended settings?

A single high exposure can lead to overexposure and result in point cloud fluctuations. Would adding a low exposure after a high one help mitigate these fluctuations, especially when imaging black objects? Are there any recommended exposure parameters for such situations?

I’ve configured three exposures (2ms, 4ms, 20ms). Does the point cloud quality primarily depend on the initial 2ms exposure?

Additionally, will adjusting the contrast threshold affect the capture time, and when should one consider adjusting the contrast threshold?

2.3 A (2023/04/14)

  1. Multiple exposures follow a first-come, first-served principle: if an earlier exposure in the sequence (for example, a low one) already captures the point cloud adequately, the point cloud from the later high exposure is not used. Generally, a high exposure tends to yield better point cloud quality, so the sequence should put the highest exposure first, followed by the next highest, and then the lowest. Because of the first-come, first-served principle, the high exposure is then more likely to be used and to produce less fluctuation (see the illustrative sketch after this list).

  2. Adjusting the contrast threshold typically has minimal impact on capture time, because the threshold is applied during point cloud post-processing rather than during acquisition.
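
Regarding point 1, the “first-come, first-served” idea can be illustrated with a small sketch that merges the depth maps of several exposures by keeping, for each pixel, the first valid value in the configured order. This is only an illustration of the principle, not the camera’s actual internal algorithm.

import numpy as np

def merge_exposures(depth_maps):
    # For each pixel, keep the first valid (non-NaN) value in the order the exposures are listed.
    merged = np.full(depth_maps[0].shape, np.nan, dtype=np.float32)
    for depth in depth_maps:
        fill = np.isnan(merged) & ~np.isnan(depth)
        merged[fill] = depth[fill]
    return merged

# Example: exposures ordered high -> low, so the high exposure "wins" wherever it is valid
high = np.array([[1.00, np.nan], [1.02, np.nan]], dtype=np.float32)
low = np.array([[0.99, 0.98], [np.nan, np.nan]], dtype=np.float32)
print(merge_exposures([high, low]))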

3D edge mismatching between turnover boxes

2.4.1 Q (2023/04/14)

In a scene with neatly stacked turnover boxes, the surface point cloud quality is good, but because the boxes fit tightly together, edge extraction is challenging and occasional mismatches occur. How can this be resolved?
For such scenarios, would it be better to use a deep learning model for segmentation before matching?

2.4.1 A (2023/04/14)

There are two approaches to try:
Approach 1: First, try the new 3D edge extraction algorithm. If it provides sufficient constraints on the degrees of freedom of the turnover box edges, use full-scene edge matching.
Approach 2: If stable 3D edge features cannot be obtained, deep learning is still required.

2.4.2 Q (2023/04/14)

What should be considered when using these two approaches? For instance, does deep learning training take a long time? How should cases with many box types and significant changes in lighting conditions be handled?

2.4.2 A (2023/04/18)

Deep learning training does not take long: you can fine-tune the carton super model for 100 epochs. For scenarios with many types and significant changes in lighting conditions, simply add the new categories and images captured under the different lighting conditions to the base model and iterate.
(Note: Try to avoid overexposed or underexposed lighting conditions on site.)

Logistics solution in Mech-Vision 1.7.0

2.5 Q (2023/04/21)

Logistics solutions in Mech-Vision 1.7.0 have a long runtime.
When building a logistics parcel solution, the single step “Predict Pick Points V2” takes more than 3 seconds to run. Is this duration reasonable? The conveyor speed for express parcels is quite fast.

2.5 A (2023/04/21)

Slow speed may be attributed to several factors, including:

  • Inadequate GPU in the on-site machine (we recommend 20-series graphics cards).
  • Incorrect parameter settings for “Predict Pick Points V2.”
  • Input images before pick point prediction are too large; consider cropping to an ROI (region of interest), as shown in the sketch below.

For specific issues, you can contact Mech-Mind engineers for remote assistance.
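
Regarding the third point, the ROI itself is configured in the Mech-Vision project; the snippet below is only a rough illustration of how much data such a crop removes. The file name and coordinates are placeholders; on site, the ROI should cover only the conveyor area.

import cv2

# Placeholder file name and ROI coordinates
img = cv2.imread("scene_2d.png")
x, y, w, h = 400, 200, 1200, 800
roi = img[y:y + h, x:x + w]
cv2.imwrite("scene_2d_roi.png", roi)
print("original:", img.shape, "-> cropped:", roi.shape)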

DLK’s deep learning API

2.6 Q (2023/06/01)

Do the deep learning API functions only support the four methods included in DLK? Is it possible to apply them to arbitrary objects?

Additional details on the deep learning question:
The latest available version of DLK is Mech-DLK 2.3.0. We currently intend to embed it through the API, but because the volume of incoming material is large and highly variable, we would like to know whether it is feasible to apply it to arbitrary objects, since DLK does not support training on arbitrary objects.

2.6 A (2023/06/01)

As of now, the latest version, Mech-DLK-2.3.0, does not support training deep learning models for grasping arbitrary objects, and it does not support API calls for this purpose. The models for arbitrary objects are currently only used in Mech-Vision.

Update camera firmware in Mech-Eye Viewer

2.7 Q (2023/06/01)

Mech-Eye Viewer 2.1.0 Camera Firmware Update
Issue: After applying Mech-Eye Viewer 2.1.0 camera firmware update and confirming the camera reboot, the camera does not appear in Mech-Eye Viewer for a long time. Clicking the “Camera List” button also does not show the camera.
Tested:

  1. Checked for IP address conflicts - none found.
  2. Tried power-cycling the camera, unplugging and plugging it back, but it still cannot be detected in Mech-Eye Viewer.

Camera Model: LSR S

2.7 A (2023/06/01)

Because the LSR S is currently in the trial production phase, it has not yet been adapted to the latest 2.1.0 release version.
Temporary solution: while on the 2.1.0 firmware, a special software version needs to be used.

Mech-Mind Software Suite in Windows virtual machine

2.8 Q (2023/06/06)

May I inquire if our entire software suite can run on a Windows virtual machine?
Do we have any relevant practical project implementation experience?

2.8 A (2023/06/07)

In theory, it is possible, but sufficient memory and GPU memory must be allocated to the virtual machine.

Acquire single-channel depth maps in Mech-Vision

2.9.1 Q (2023/05/30)

How can Mech-Vision software obtain a single-channel depth map?

I can obtain a depth map from a Mech-Vision step, but it contains color information. Is it possible to convert it into a single-channel grayscale image containing only height information?

2.9.1 A (2023/05/30)

The depth map obtained in Mech-Vision is a single-channel, 32-bit image; in other words, each pixel stores a float value representing height information. Mech-Vision visualizes the depth map in color, while some other software visualizes it in black and white. Could you provide more details about the issue so we can see whether there is a suitable solution?
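
To illustrate the difference between the stored data and its visualization: the depth values themselves are 32-bit floats, and the color or grayscale rendering is applied only for display. Below is a minimal sketch (the file name is a placeholder) that renders a float depth map both ways with OpenCV.

import cv2
import numpy as np

# Placeholder file name: a single-channel 32-bit float depth map, e.g. a TIFF export
depth = cv2.imread("depth_32f.tiff", cv2.IMREAD_UNCHANGED)

# Scale to 0-255 for display only; the underlying measurement data stays 32-bit float
preview = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imwrite("depth_gray_preview.png", preview)                                        # grayscale rendering
cv2.imwrite("depth_color_preview.png", cv2.applyColorMap(preview, cv2.COLORMAP_JET))  # color rendering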

2.9.2 Q (2023/05/30)

Is it possible to convert it to a 16-bit or 8-bit depth map? Typical 2D algorithms do not support images in the CV_32FC1 format.

2.9.2 A (2023/06/02)

All depth maps are in single-precision float format (32-bit). Are you referring to converting the depth map into a color image? (Note that such a color image may not be convertible into a point cloud.)

Mech-Vision uses 8-bit 4-channel storage to save the 32-bit single-channel depth map.

import numpy as np
import cv2

def change_image(img):
    # Reinterpret the 8-bit 4-channel PNG data as 32-bit float depth values
    img_depth = np.frombuffer(img.tobytes(), dtype=np.float32)
    img_depth = img_depth.reshape(img.shape[0:2])
    return img_depth

if __name__ == '__main__':
    # Read the PNG exactly as stored (-1 = cv2.IMREAD_UNCHANGED keeps all 4 channels)
    img = cv2.imread('depth_image_00000.png', -1)
    img_depth = change_image(img)
    # Save as a single-channel 32-bit float TIFF
    cv2.imwrite('depth_image_00000.tiff', img_depth)
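
If a 16-bit image is really needed afterwards (because a downstream 2D algorithm cannot handle CV_32FC1), one common workaround is to round the float values into a uint16 image. The sketch below assumes the depth values are in millimeters and fit in the 0-65535 range (check the actual unit of your depth map); sub-millimeter precision is lost, and such an image is suitable for 2D processing only.

import cv2
import numpy as np

# Continues from the script above: the TIFF holds the 32-bit float depth map
img_depth = cv2.imread('depth_image_00000.tiff', cv2.IMREAD_UNCHANGED)

# Assumption: depth values are in millimeters; invalid (NaN) pixels become 0
img_depth = np.nan_to_num(img_depth, nan=0.0)
depth_u16 = np.clip(np.round(img_depth), 0, 65535).astype(np.uint16)
cv2.imwrite('depth_image_00000_16bit.png', depth_u16)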

Whether Mech-SDK (2.3.0/2.4.0) supports C#

2.10 Q (2023/06/08)

Regarding DLK deep learning, does the API version 2.3.0 support C#? When will C# support be added to the DLK API for version 2.4.0? Can you provide any relevant C# API calling demos?

Additional details on the deep learning question:

A customer wishes to use DLK’s API for development in C#. They want to confirm whether version 2.3.0 supports a C# API, and when the C# API for version 2.4.0 will be available. The customer also needs a demo of DLK’s C# interface, if available.

2.10 A (2023/06/08)

Currently, Mech-SDK version 2.3.0 supports C#, and the C#-related content is included within the software. Apart from the classification and cascade models, the models in version 2.4.0 can all be run using Mech-SDK version 2.3.0.

Mech-SDK version 2.4.0 will gradually be made available on Git. Initially, C interfaces will be provided, with C++, C#, and Python interfaces to follow.

Mech-Eye Viewer version for laser cameras

2.11 Q (2023/06/12)

Which version of Mech-Eye Viewer should be used with Mech-Mind line laser cameras?
I noticed that a special version is required, but I couldn’t find a download source on the website.

2.11 A (2023/06/12)

Hello, the software for line laser cameras is currently used only on a small scale.
At the moment, please contact your sales manager or technical pre-sales support to obtain the installation package.
Thank you for your understanding.

Others

IPC and GPU

3.1 Q (2023/05/24)

Do you need an industrial PC (IPC) with a GPU for the “any object” functionality?
Deep learning:

  • Step: Any object v2, not DLK training
  • Mech-Vision version 1.7.1

3.1 A (2023/05/24)

Currently, it is necessary to use an industrial PC with a GPU for the “any object” functionality.

Foreign object detection

3.2 Q (2023/05/29)

Have we conducted any foreign object detection projects?
[image]
[image]

The images above are inspection images; there may be foreign objects inside that need to be detected.

3.2 A (2023/05/29)

For a specific project evaluation, please contact the corresponding Mech-Mind pre-sales team for assistance. Thank you for your cooperation.