Collection of AI-translated Q&A posts (compiled on 2023/09/22)

This post compiles a collection of question-and-answer threads that have been translated into English by AI.
Please note: AI translation has its limitations, and the content should still be interpreted in light of the actual context.

Camera

PRO M camera 3D capture Gain affects the grayscale of 2D images

1.1.1 Q (2023/08/10)

Camera Model: PRO M V4
Firmware Version: 2.1.0
Capture Mode: ETH
Working Distance: 1500mm
Mech-Eye Viewer Version: 2.1.0
Mech-Vision Version: 1.7.1

Why does the Gain for 3D capture affect the exposure of 2D capture?

Parameters for the images below:
3D Capture Parameters: Exposure time: 10.1ms, Gain: 0dB
2D Capture Parameters: Exposure time: 55.1ms

When increasing the Gain for 3D capture from 0dB to 10dB (while keeping 2D Exposure parameters unchanged), the image becomes overexposed.

1.1.1 A (2023/08/11)

Hello,
First, let’s talk about “Gain”: gain is the pre-amplification applied in the readout circuit of the image sensor’s pixels. Increasing the gain increases this pre-amplification, making the sensor more sensitive to light and producing brighter images.

For more detailed information, please refer to: Concepts and principles behind camera parameters.

Regarding the issue you mentioned: increasing the gain also increases the pre-amplification applied to the 2D capture, making it more sensitive to light. So when you keep the exposure time at 55.1 ms, the actual image grayscale ends up much higher than that exposure time alone would produce.
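
As a rough worked example (assuming the gain follows the usual 20·log10 convention for sensor signal gain, which is an assumption here): raising the gain from 0 dB to 10 dB amplifies the signal by a factor of about 10^(10/20) ≈ 3.16. With the 2D exposure time still fixed at 55.1 ms, the image therefore comes out roughly three times brighter than before, which is enough to push a previously well-exposed image into overexposure.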

The solution to this problem:
Set the 2D exposure mode to Auto and control the overall image grayscale by setting the desired gray value. This effectively solves the problem.

1.1.2 Q (2023/08/12)

Hello, thank you for your response.
Question: Aren’t the depth image and the 2D image captured independently? If so, why can’t the gains for 2D and 3D capture be controlled separately?
I set the 2D exposure mode to Timed because Timed mode ensures stable and consistent 2D capture time when the light source is consistent.
In my current scenario the light source is stable, so the 2D exposure mode is Timed with an exposure time of 55.1 ms, which is sufficient for 2D capture. However, because the objects being photographed have different colors, I need to adjust the 3D gain to keep the depth map and point cloud usable. As a result, the 2D exposure time has to be re-tuned whenever the 3D gain changes, which indirectly affects the time required for 2D capture.

1.1.2 A (2023/08/14)

Hello,

  1. Regarding the question you raised, we will revisit it in a future release. For now, the workaround is to set the 2D exposure mode to Auto, which temporarily addresses the issue.
  2. The root cause of the problem you are seeing is the need for repeated gain adjustments.
    For objects of different colors in the scene, try setting a single, reasonable 3D exposure that covers all cases. If one exposure is not enough, use two exposure times instead of changing the gain repeatedly to tune point cloud quality.
    You can refer to the following link for point cloud adjustment methods: Camera using knowledge collection.

The camera’s SYS light is yellow

1.2 Q (2023/08/18)

  • Software version: 2.1.0
  • Camera model and firmware version: DEEP
  • Environmental conditions on-site: The ambient temperature is relatively high, mostly above 36°C.
  • Main observations: While using the camera, we noticed that the SYS light is flashing yellow, but it doesn’t seem to affect the actual image capture. What could be causing this issue?
    image

1.2 A (2023/08/18)

Yellow SYS light:
A yellow SYS light indicates an alarm state: there is some abnormal condition in the system, but it does not completely prevent the camera from working. You can check the camera’s system logs to see whether there are any error messages; typically this is related to temperature or supply voltage.

Upon further analysis of this issue: the internal camera logs show temperature-related information. The camera temperature should not exceed the upper limit of 55°C, and the laser temperature has currently reached 48°C, which triggers the alarm. As long as the temperature stays within the limit, the camera will continue to function normally.

For information on indicator light statuses and related issues, please refer to: Deciphering indicator light status in structured light cameras.



Software

Can camera intrinsic parameters be directly obtained using Mech-Eye Viewer?

2.1 Q (2023/08/15)

Can camera intrinsic parameter files be directly obtained using Mech-Eye Viewer?

2.1 A (2023/08/15)

Hello, Mech-Eye Viewer lets you check the camera’s intrinsic calibration results directly, but it does not let you obtain the intrinsic parameter values themselves.

If you want the intrinsic parameter values, connect to the camera and read them through the Mech-Eye API.

For the API method of obtaining intrinsic parameters, please refer to the “Mech-Eye Industrial 3D Camera User Manual”: Mech-Eye API.
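
For reference, here is a minimal Python sketch of reading the intrinsics over the Mech-Eye API while the camera is connected. The class and method names below (Device, get_device_list, connect, get_device_intrinsic) are illustrative assumptions and may differ between API versions, so please check the Mech-Eye API reference for your installed version:

    # Sketch only: the identifiers below are illustrative, not guaranteed to
    # match your Mech-Eye API version; consult the API reference for exact names.
    from MechEye import Device

    device = Device()
    infos = device.get_device_list()             # discover cameras on the network
    device.connect(infos[0])                     # connect to the first camera found

    intrinsics = device.get_device_intrinsic()   # intrinsics of the connected camera
    print(intrinsics)                            # e.g. fx, fy, cx, cy and distortion coefficients

    device.disconnect()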


Python Version Upgrade in Mech-Vision

2.2 Q (2023/08/16)

Hello, currently, the built-in Python version in Mech-Vision 1.7.1 is 3.6.

May I inquire whether future versions of Mech-Vision can be upgraded to Python 3.7 or 3.8?

Ideally, Mech-Vision could offer several Python versions for users to choose from, or provide an option to use the locally installed Python environment.

2.2 A (2023/08/17)

Thank you for your feedback, and we will keep pace with the stable versions of Python. Please stay tuned for updates.


Operation guide document for Mech-Vision’s “Operator Interface (Custom)” feature

2.3 Q (2023/08/17)

Do you have an operation guide document for Mech-Vision’s “Operator Interface (Custom)” feature?

2.3 A (2023/08/17)

In the current version, the Operator Interface (Custom) is built with QML. Using it is fairly complex and development-oriented, so there is no official documentation available for it yet.

This feature will be significantly improved in the upcoming V1.8.0, which will make it possible to create an Operator Interface without writing any code. Please stay tuned.


Do templates need to be redone after moving the camera and the robot?

2.4 Q (2023/08/23)

Camera model: PRO S, mounting mode: EIH. If the camera and the robot are moved, do we need to redo the templates?
(Software version 1.7.4)

2.4 A (2023/08/23)

Hello, if the templates are used for point cloud matching, you don’t need to redo them unless the parts change. If the relative position between the camera and the robot changes, you will need to recalibrate the extrinsic parameters.


Python function not found in Mech-Vision

2.5 Q (2023/08/25)

Hello, when importing a pre-written Python script in Mech-Vision, sometimes it can’t read the functions within the script. How can this be resolved?

2.5 A (2023/08/27)

Based on your description, the issue is likely that a new function was added to the Python script but the script was not reloaded in the Step “Calc Results by Python” in Mech-Vision. To learn how to reload it automatically, refer to the Usage Instructions section of the “Calc Results by Python” page in our online documentation.
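
For context, this Step loads a user script and calls a function you select in the Step’s parameters. A minimal script might look like the sketch below; the file name, function name, pose layout, and offset value are all made up for illustration, and in this sketch we simply assume the Step passes its inputs as the function’s arguments and takes the return value as output (see the documentation page for the exact conventions):

    # my_calc.py - hypothetical script loaded by "Calc Results by Python".
    # The function chosen in the Step's parameters is the one that gets called.

    def offset_poses_z(pose_list):
        """Illustrative logic: lift each input pose by 0.01 along Z and return the result."""
        return [[p[0], p[1], p[2] + 0.01] + list(p[3:]) for p in pose_list]

If a function like this is added to a script that is already loaded, the Step has to re-read the script before the new function appears, which is the reload behavior mentioned above.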

If the problem still persists, please provide any error messages or relevant screenshots when the issue occurs.


Using two communication methods simultaneously: the vision system controlling the robot (AUBO) + acting as a client to the host computer (TCP/IP communication)

2.6.1 Q (2023/08/30)

Question:
While the vision system is guiding the robot (AUBO_I5) for picking, it also needs to communicate with a host computer over TCP/IP. Currently, Mech-Viz triggers the camera to capture images, runs the project, and guides the robot through the picking motion. However, the gripper signals are controlled by the host computer, so while the robot is performing the picking action the vision side needs to send signals to the host computer. How can this functionality be implemented?

Software Versions: Mech-Center 1.6.1, Mech-Vision 1.6.2, Mech-Viz 1.6.2, Mech-Eye Viewer 2.1.0
Camera Model and Firmware Version: PRO S 1000
Robot Model: AUBO_I5
Capture Method: ETH
Issue Description: There is currently no robot engineer on-site, so the vision system must act as the master to drive the whole sequence, with the host computer controlling the gripper opening and closing. Can the vision software send gripper open/close signals to the host computer through an Adapter?

Please help!

2.6.1 A (2023/08/30)

  1. You can communicate with the host computer through the adapter using Mech-Viz’s “Notify” and “Branch by Msg” features. When a gripper action is needed, Mech-Viz sends a notification to the host computer via the adapter; the host computer performs the gripper action and returns a completion signal; the adapter then sets the corresponding exit of the message branch so that Mech-Viz lets the robot proceed with the next action (see the sketch after this list).

  2. For information on programming the adapter, please refer to: Adapter Programming Guide. If you encounter technical issues while writing the adapter, you can contact Mech-Mind technical support for assistance.
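
As a rough illustration of the signalling described in point 1 (this is plain Python socket code, not the actual Mech-Mind Adapter API; the host address, port, and message strings are placeholders):

    # Sketch: generic TCP client logic an adapter could use to ask the host
    # computer to actuate the gripper and to wait for its completion reply.
    # The address 192.168.1.50:6000 and the message strings are placeholders.
    import socket

    def send_gripper_command(command, host="192.168.1.50", port=6000):
        """Send a gripper command to the host computer and block until it replies."""
        with socket.create_connection((host, port), timeout=5.0) as conn:
            conn.sendall(command.encode("ascii"))
            reply = conn.recv(1024).decode("ascii")   # e.g. "DONE" once the gripper has acted
        return reply

    # Flow: when Mech-Viz reaches the "Notify" step, the adapter calls
    # send_gripper_command("GRIPPER_CLOSE"); once the reply arrives, the adapter
    # sets the corresponding exit of "Branch by Msg" so the robot continues.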

2.6.2 Q (2023/09/14)

I cannot find “Branch by Msg” in version 1.7.4.

2.6.2 A (2023/09/14)

Hello. Click this button in the Workflow section and the Step will be shown.

image image


Point Cloud and Image Fusion

2.7.1 Q (2023/09/01)

I am using the NANO camera and want to obtain the 2D images and point clouds separately, process each of them, and then fuse the processed image and point cloud together (point cloud coloring). For this I need the camera’s intrinsic parameters (which I know how to obtain), its extrinsic parameters (which I don’t), and an API for fusing the point cloud with the image (which I don’t know of). Is there a relevant API for this? Thank you.

2.7.1 A (2023/09/01)

Hello, your requirement is to obtain textured point clouds, and you can use the capturePointXYZBGRMap function in the Mech-Eye API to achieve this. You can refer to: C++ (Windows).

2.7.2 Q (2023/09/01)

If I want to process the image separately and then map it onto the point cloud, is there a relevant API for that?

2.7.2 A (2023/09/01)

The situation you described belongs to the post-processing stage of image processing.

The Nano V3 camera you are using is a monocular camera that generates both the depth map and the 2D texture map from the same camera view. Therefore, you can directly map the 2D texture map onto the depth map and obtain a textured point cloud using the same pixel positions.

However, when performing this processing, it is essential to ensure that the processed image is accurate, as inaccuracies may lead to misalignment of the point cloud textures.
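
As an illustration of that pixel-wise mapping, here is a minimal NumPy sketch. It assumes the depth map is in millimetres, the processed 2D image is a BGR array of the same resolution, and fx, fy, cx, cy are the intrinsic parameters; the names are ours for illustration, not part of any Mech-Eye API:

    import numpy as np

    def color_point_cloud(depth_mm, bgr, fx, fy, cx, cy):
        """Back-project the depth map with the pinhole model and attach the 2D
        image's colors at the same pixel positions (monocular case)."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float64) / 1000.0       # mm -> m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        valid = z > 0                                   # drop invalid (zero-depth) pixels
        points = np.stack([x[valid], y[valid], z[valid]], axis=1)
        colors = bgr[valid]                             # same pixel grid, so direct indexing works
        return points, colors                           # N x 3 points and N x 3 BGR colors

If the processed 2D image has been warped or cropped, this direct indexing no longer holds, which is exactly the misalignment risk mentioned above.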


Mech-Vision project automatically entering the Operator Interface when opened

2.8 Q (2023/09/04)

As the title suggests, how can I prevent Mech-Vision projects from automatically entering the Operator Interface when opened?
Software Version: V1.7.2

2.8 A (2023/09/05)

Hello, currently, when Mech-Vision closes a project, it records the project’s state, i.e., whether the project was last in Standard Editing Mode or in the Operator Interface.

You can try the following options based on your needs:

  1. If you want the project to load automatically but not enter the Operator Interface, and you don’t plan to use the Operator Interface later: simply remove or rename the project’s corresponding <project name>.qml file.
  2. If you want the project to load automatically and still want to use the Operator Interface at certain times: make sure you save the project changes while in Standard Editing Mode when closing the project.

Image data cannot be loaded by the Step “Capture Images from Camera” in virtual mode

2.9 Q (2023/06/15)

Software: Mech-Vision 1.7.1


Data in screenshot 1 can be read normally.


Data in screenshot 2 cannot be used and triggers the error message “Invalid Depth Image,” yet it loads properly with the Step “Read Images”.

2.9 A (2023/06/15)

The second set of data failed to load because there was no color image corresponding to the depth image depth_image_00000.png.

We have updated our documentation center in response to this feedback. For more detailed information, please refer to the Mech-Mind Documentation: Capture Images from Camera.



Others

Three-Axis Anti-Collision for Gantry Robots

3.1 Q (2023/08/14)

Some customers are using three-axis systems on-site for workpiece loading and unloading. There have been issues with deformation of the bins and instances of material spillage, prompting the need for collision detection.

Are there any plans for a solution to address three-axis anti-collision in the future?

3.1 A (2023/08/16)

The gantry robot configuration is currently in development and will be introduced in future releases.


Can upgrading the specifications of the IPC’s graphics card improve the execution speed of non-deep learning-related steps in Mech-Vision?

3.2.1 Q (2023/08/17)

Can upgrading the specifications of the industrial computer’s graphics card improve the execution speed of non-deep learning-related steps in Mech-Vision?
Which steps are strongly correlated with graphics card performance in Mech-Vision?

3.2.1 A (2023/08/18)

Hello, you’ve raised a very interesting question.

Based on our testing, the answer is no: the regular (non-deep-learning) Steps hardly use the graphics card for computation.

3.2.2 Q (2023/08/18)

So, can upgrading the CPU specifications improve the execution speed of non-deep learning-related steps? If there is an improvement, approximately how significant is it?

3.2.2 A (2023/08/18)

Hello, many factors affect CPU computing efficiency, so we cannot give an exact figure for the improvement. However, in our testing experience the speed of the Steps is positively correlated with the CPU’s multi-core performance; you can refer to CPU benchmark scores.


How to determine whether to use edge template matching or full template matching for workpieces?

3.3 Q (2023/08/18)

Is there a detailed explanation of the two matching modes?

3.3 A (2023/08/18)

Regarding the selection of the two matching modes, you can refer to How to Choose the Model Based on 3D Matching Features?.

Additionally, when the target object’s surface features are more uneven (such as crankshafts, rotors, steel rods, etc.), it is recommended to use surface matching and create a point cloud template that reflects the surface features of the object. When the target object is relatively flat and exhibits clear, fixed edge features under the camera (such as panels, tracks, links, brake discs, etc.), it is recommended to use edge matching and create a point cloud template that reflects the object’s edge features.


Visual recognition: determining if there are labels on cartons

3.4 Q (2023/08/23)

When it comes to visual recognition to determine if there are labels on cartons, what would be the best approach?

Mech-Vision: 1.7.2
Mech-Viz: 1.7.2
Mech-Eye Viewer: 2.1.0
Camera: Deep
image

3.4 A (2023/08/23)

  1. It is recommended to add a separate 2D camera at a downstream station for this check. You can use deep learning or grayscale-image segmentation (a minimal sketch of the grayscale approach is included after this list). This will make label recognition more stable.

  2. If you are using a Deep camera for visual positioning of the cartons and determining the presence of labels, you can perform recognition after identifying the cartons. Utilize deep learning or grayscale image segmentation for this purpose. A similar project using deep learning is available: Vis-identify_labels.zip (8.5 KB).

  3. When using a Deep camera for visual positioning of cartons to determine label presence, there are some risks involved, such as unstable ambient lighting, the camera’s shooting angle being too high, or the label paper being too small. These factors can reduce the stability of label recognition.
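
A minimal sketch of the grayscale-segmentation idea from point 1, using OpenCV; the file name, threshold choice, and area limits are placeholders, and a real project would do the equivalent inside Mech-Vision or with deep learning as suggested above:

    import cv2

    # Placeholder input: a cropped grayscale image of one carton face.
    gray = cv2.imread("carton_face.png", cv2.IMREAD_GRAYSCALE)

    # A white label is usually much brighter than the cardboard around it,
    # so a simple Otsu threshold separates label candidates from the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Keep only blobs whose area is plausible for a label (limits are made up).
    labels = [c for c in contours if 2000 < cv2.contourArea(c) < 50000]
    print("label found" if labels else "no label found")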


Default username and password

3.5 Q (2023/08/31)

As mentioned, do our IPCs have a default, uniform username and password? Can I use remote assistance to access it?

Also, what does CST stand for? What does it represent?

3.5 A (2023/08/31)

Hello,

  • The standard Mech-Mind IPC comes without a password by default. If you want to use Remote Desktop, you’ll need to set a password in Windows or create another user account. Alternatively, you can use third-party remote desktop software such as TeamViewer or Sunlogin.
  • CST is just a code; it doesn’t have a special meaning. You can modify it from within the system.

Step “Invert Poses”

3.6 Q (2023/09/05)

Can the Step “Invert Poses” be understood as describing the spatial relationship between two poses? That is, when moving from pose A to pose B, is it correct to treat the resulting XYZ values and Euler angles simply as an offset?

In essence, can we think of it as establishing a reference frame at pose A and determining the position of pose B within that reference frame?

3.6 A (2023/09/05)

Hello, with regard to the Step “Invert Poses” (what we commonly call pose inversion), it does not represent the distance or offset between two poses.

Typically, we consider the pose of an object within a specific reference frame. For instance, if a camera takes a picture of object A, then the pose of A obtained is in the camera’s reference frame.

When we run the Step “Invert Poses” on the input pose of A, the result of the inversion is the pose of the camera in a reference frame where object A serves as the origin.
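
A small numeric sketch of that relationship, using 4x4 homogeneous transforms in NumPy (the rotation and translation values are arbitrary example numbers):

    import numpy as np

    # Pose of object A in the camera's reference frame: a 90-degree rotation
    # about Z plus a translation of (0.2, 0.0, 1.0) in metres.
    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = np.array([0.2, 0.0, 1.0])
    T_cam_A = np.eye(4)
    T_cam_A[:3, :3] = R
    T_cam_A[:3, 3] = t

    # "Invert Poses" corresponds to inverting this transform: the result is the
    # camera's pose expressed in the reference frame whose origin is object A.
    T_A_cam = np.linalg.inv(T_cam_A)

    # Equivalently, the inverted pose has rotation R.T and translation -R.T @ t,
    # which is not simply the negated offset between the two original poses.
    assert np.allclose(T_A_cam[:3, :3], R.T)
    assert np.allclose(T_A_cam[:3, 3], -R.T @ t)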


Standard interfaces cannot trigger Mech-Viz or Mech-Vision correctly

3.7 Q (2023/08/18)

When using the Standard Interface on-site, we often find that the project cannot be triggered correctly. Sometimes the problem is on the robot side, and sometimes it’s a configuration issue on the vision side. Is there a systematic troubleshooting approach with a clear set of steps?

3.7 A (2023/08/18)

Hello! You can refer to the following checklist for troubleshooting:

  1. Check if the robot controller and software versions are compatible.
  2. Verify if the robot software package meets the requirements.
  3. Ensure that the robot is physically connected to the IPC.
  4. Confirm that the firewall and antivirus software on the IPC are turned off.
  5. Check if the robot’s IP address is set to the same subnet as the IPC.
  6. Verify if there is a successful PING communication between the robot and the IPC.
  7. Ensure that the necessary files for standard interface communication are loaded onto the robot.
  8. Check if the communication initialization program or configuration files on the robot have been updated with the IPC’s IP address and port number.
  9. Verify if the configuration parameters for standard interface communication in Mech-Vision or Mech-Center are set according to the robot model.
  10. Confirm if the standard interface service in Mech-Vision or Mech-Center has been enabled.