I noticed that the point cloud result differs between the case where the bearings are not stacked and the case where they are stacked. Currently I am using an instance segmentation DL model for the bearing.
Colour Image Result:
Point Cloud Result with Pose:
From the point cloud results, the unstacked case shows the correct point cloud, but the stacked case shows an incorrect point cloud, which makes the Z axis of the pick pose slant in another direction. Is there any way to resolve this issue? Should I also create a model for this bearing in Mech-Vision for 3D matching, instead of using only the DL model?
Yes, I recommend using 3D matching after deep learning. Instance segmentation can segment each individual object, but the mask it generates is based on 2D information and cannot separate the point clouds of overlapping objects. Hence, performing 3D matching on the segmented point clouds is necessary.
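To give a concrete picture of what 3D matching adds on top of the 2D mask, here is a minimal sketch outside Mech-Vision using Open3D (assuming version 0.12 or later); the file names and the voxel size are assumptions, and this is not the actual Mech-Vision implementation. It registers a bearing model point cloud against one segmented point cloud with FPFH-based coarse matching followed by point-to-plane ICP, which recovers a full 6-DoF pose even when points from a neighbouring bearing contaminate the segment.

```python
import open3d as o3d

# Hypothetical inputs: a reference point cloud of one bearing and the
# point cloud cropped by one instance-segmentation mask.
model = o3d.io.read_point_cloud("bearing_model.ply")
segment = o3d.io.read_point_cloud("bearing_segment.ply")

voxel = 2.0  # mm; assumed, tune to the sensor resolution
model_down = model.voxel_down_sample(voxel)
segment_down = segment.voxel_down_sample(voxel)
for pc in (model_down, segment_down):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

# FPFH features for coarse (global) matching
model_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    model_down,
    o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
segment_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    segment_down,
    o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))

coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    model_down, segment_down, model_fpfh, segment_fpfh,
    mutual_filter=True,
    max_correspondence_distance=voxel * 1.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine matching with point-to-plane ICP, starting from the coarse result
fine = o3d.pipelines.registration.registration_icp(
    model_down, segment_down, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

# fine.transformation is the 4x4 model-to-scene pose; its Z column gives
# the pick direction, which stays correct even for a stacked bearing.
print(fine.transformation)
```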
Thanks for the information. If I don’t want to use 3D matching, is there any other way to overcome this, for example, by using the bounding box values from the deep learning output in Mech-Vision?
If you can preprocess the point cloud of the bearing’s side face so that it is relatively clean and has clear boundaries between the individual bearings, then using deep learning followed by the “Calc Poses and Dimensions from Planar Point Clouds” step might work. However, this approach places stringent requirements on point cloud preprocessing.
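For reference, the idea behind that step can be approximated outside Mech-Vision roughly as follows. This is a minimal Open3D sketch, not the actual “Calc Poses and Dimensions from Planar Point Clouds” implementation; the file name, distance threshold, and Z-axis convention are assumptions. It fits the dominant plane of a preprocessed bearing face with RANSAC and builds a pose whose Z axis is the plane normal, which is why a clean, well-separated planar segment is essential.

```python
import numpy as np
import open3d as o3d

# Hypothetical input: the preprocessed point cloud of one bearing's face,
# already cropped by the deep learning mask and denoised.
face = o3d.io.read_point_cloud("bearing_face.ply")

# Fit the dominant plane; stray points from a neighbouring bearing or
# noise are rejected as outliers.
plane_model, inliers = face.segment_plane(distance_threshold=1.0,  # mm, assumed
                                          ransac_n=3,
                                          num_iterations=1000)
face_plane = face.select_by_index(inliers)

# Build a pick pose: origin at the centroid, Z axis along the plane normal.
a, b, c, _ = plane_model
normal = np.array([a, b, c])
normal /= np.linalg.norm(normal)
if normal[2] < 0:  # assumed convention: tool Z points along the camera view axis
    normal = -normal
centroid = np.asarray(face_plane.points).mean(axis=0)

# Choose any X axis perpendicular to Z, then Y = Z x X for a right-handed frame
x_axis = np.cross(normal, [0.0, 0.0, 1.0])
if np.linalg.norm(x_axis) < 1e-6:   # normal nearly parallel to the camera Z axis
    x_axis = np.array([1.0, 0.0, 0.0])
x_axis /= np.linalg.norm(x_axis)
y_axis = np.cross(normal, x_axis)

pose = np.eye(4)
pose[:3, 0] = x_axis
pose[:3, 1] = y_axis
pose[:3, 2] = normal
pose[:3, 3] = centroid
print(pose)  # 4x4 pick pose in the camera frame
```

If the planar segments are not cleanly separated, the fitted plane will span two bearings and the resulting Z axis will again be slanted, which is why 3D matching is the more robust option for stacked parts.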