Robotics 2.0: What previously unattainable tasks can AI robots perform?

The biggest change AI brings to robotic arms is this: in the past, a robotic arm could only repeat a process written by an engineer. However accurate and precise, it could not cope with changes in its environment or in the process itself.

Thanks to AI, machines can now learn to handle a wide range of objects and tasks on their own. Specifically, AI robots have achieved breakthroughs over traditional robotic arms in three major areas:

1. Vision System

Even the most advanced 3D industrial cameras cannot match the human eye at judging depth and distance, or at identifying transparent packaging, reflective surfaces, and deformable objects.

This explains why it is difficult to find a camera that can both provide accurate depth and identify most packages and items. Thanks to AI, however, this situation is changing.

Machine vision has made tremendous progress in the past few years, driven by innovations in deep learning, semantic segmentation, and scene understanding.

These advances have improved depth estimation and image recognition with commodity cameras, allowing manufacturers to obtain accurate image information and to reliably identify transparent or reflective packaging without expensive specialized cameras.
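As a concrete illustration, here is a minimal sketch of this kind of commodity-camera pipeline, using a pretrained semantic segmentation model from the open-source torchvision library. The model choice, preprocessing values, and file names are illustrative assumptions, not details from any particular vendor's system:

```python
# Minimal sketch: semantic segmentation on a single commodity-camera frame.
# Assumes PyTorch and torchvision (>= 0.13) are installed; the model choice
# is illustrative, not the specific system discussed in this article.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("camera_frame.jpg")        # hypothetical input image
batch = preprocess(frame).unsqueeze(0)        # add a batch dimension

with torch.no_grad():
    output = model(batch)["out"]              # (1, num_classes, H, W) logits

mask = output.argmax(dim=1).squeeze(0)        # per-pixel class labels
print(mask.unique())                          # classes detected in the frame
```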

Deep learning object recognition examples, from left to right: mask, object modeling, grasp point prediction (source: OSARO)

2. Scalability

Unlike traditional machine vision, deep learning does not require pre-registering each item or building a 3D CAD model of it. After training, the artificial neural network can recognize objects in an image automatically.

Unsupervised or self-supervised learning can further reduce the need to manually label data or hand-engineer features, bringing machine learning closer to the way humans learn.
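One way to picture label-free learning is a contrastive (SimCLR-style) objective, where the network learns to match two augmented views of the same image without any human labels. The sketch below is illustrative; the batch size, embedding size, and temperature are assumptions:

```python
# Sketch of a contrastive (SimCLR-style) loss: one example of learning
# visual features without manual labels. Shapes and the temperature
# value are illustrative assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, D)
    sim = z @ z.t() / temperature                  # pairwise cosine similarity
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))          # ignore self-similarity
    # The positive pair for row i is row i+n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: embeddings from an encoder applied to two augmentations per image.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```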

Machine learning reduces the need for human intervention and lets the robot handle new parts without engineers rewriting the program. And as the machine gathers more data during operation, the accuracy of its model keeps improving.

Currently, a typical production line surrounds the robot with equipment such as shaker tables, feeders, and conveyor belts to ensure it can pick the required components accurately.

If machine learning develops further and robotic arms become smarter still, perhaps one day these peripherals, which can cost four or five times as much as the arm itself, will no longer be needed.

On the other hand, because deep learning models are generally stored in the cloud, robots can also learn from each other and share knowledge. For example, if one robotic arm learns overnight how to assemble two parts, it can upload the new model to the cloud and share it with other robots, saving them the learning time and ensuring consistent quality.
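In practice, such sharing can be as simple as publishing trained weights to storage that every robot can read. The sketch below assumes a hypothetical cloud-mounted path and a PyTorch model; it illustrates the idea rather than any specific vendor's fleet-learning API:

```python
# Sketch of the fleet-learning idea: one robot publishes its updated model
# to shared (cloud) storage, and the others pull it. The shared path is a
# hypothetical placeholder.
import torch

SHARED_MODEL_PATH = "/mnt/cloud/grasp_model.pt"   # hypothetical cloud mount

def publish_model(model: torch.nn.Module) -> None:
    """Called by the robot that just finished overnight training."""
    torch.save(model.state_dict(), SHARED_MODEL_PATH)

def sync_model(model: torch.nn.Module) -> None:
    """Called by every other robot to pick up the newest weights."""
    model.load_state_dict(torch.load(SHARED_MODEL_PATH))
```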

3. Intelligent Placement

Some instructions that seem easy to us, such as handling items carefully or arranging them neatly, represent a huge technical challenge for a robot. How is “careful handling” defined? Is it stopping all force the moment the object touches the tabletop? Moving the object to 6 cm above the table and then letting it fall naturally? Gradually reducing speed as it approaches the surface? And how do these different definitions affect the speed and accuracy of placement?
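To make the ambiguity concrete, the last of those definitions, gradually reducing speed near the surface, could be expressed as a simple speed ramp. All thresholds and speeds below are illustrative assumptions:

```python
# Sketch of one possible definition of "careful handling": scale the
# downward speed with the remaining distance to the table. Every number
# here is an illustrative assumption.

def descent_speed(height_above_table_m: float,
                  max_speed: float = 0.25,     # m/s, free-travel speed
                  slow_zone: float = 0.06,     # m, start slowing here
                  touch_speed: float = 0.01):  # m/s, speed at contact
    """Return a target downward speed that ramps down near the surface."""
    if height_above_table_m >= slow_zone:
        return max_speed
    # Linear ramp from max_speed (at slow_zone) down to touch_speed (at contact).
    fraction = max(height_above_table_m, 0.0) / slow_zone
    return touch_speed + fraction * (max_speed - touch_speed)

for h in (0.10, 0.06, 0.03, 0.0):
    print(f"{h:.2f} m above table -> {descent_speed(h):.3f} m/s")
```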

Arranging items neatly is even more difficult. Even setting aside the definition of “neat,” the robot must first grip an item at the correct position in order to place it at the desired position and angle. Robotic arms are still not as dexterous as human hands, and most of them still rely on suction cups; there is plenty of room for improvement before they match the fine motor skills of human joints and fingers.


Second, the robot must instantly determine the angle, position, and shape of the object to be gripped. Take a cup as an example: the robotic arm needs to know whether the cup is facing up or down, whether it should be placed on its side or upright, and whether other items or obstacles are in the way. Only then can it decide where to place the cup to make the most efficient use of space.

From birth, we constantly practice picking up and putting down all kinds of items, until these complicated tasks can be completed instinctively. A machine has no such experience and must learn each task from scratch.

Leveraging AI, a robotic arm can now judge depth more accurately. Through training, it can also learn to determine whether a cup is facing up, facing down, or in some other state.
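Such an orientation check can be framed as a small image classifier. The sketch below assumes a hypothetical trained model file and label set; both are placeholders, not artifacts from the article:

```python
# Sketch of the cup-orientation check as a small image classifier.
# The model file and label set are hypothetical placeholders.
import torch
from torchvision import transforms
from PIL import Image

LABELS = ["upright", "upside_down", "on_side"]        # assumed classes

model = torch.jit.load("cup_orientation.pt").eval()   # hypothetical trained model
to_tensor = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])

image = to_tensor(Image.open("bin_crop.jpg")).unsqueeze(0)
with torch.no_grad():
    probabilities = model(image).softmax(dim=1).squeeze(0)

print(LABELS[int(probabilities.argmax())])
```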

Object modeling and voxelization can be used to predict and reconstruct 3D objects, enabling the machine to capture the size and shape of the actual item more accurately and to place it in the required position more precisely.
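Voxelization itself is straightforward to sketch: snap each 3D point of a scanned object to a grid cell and mark that cell occupied, giving the planner a volumetric model of the item. The grid resolution and the randomly generated point cloud below are illustrative:

```python
# Sketch of voxelization: convert a point cloud into a boolean occupancy
# grid so the planner can reason about object size and shape.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.01) -> np.ndarray:
    """points: (N, 3) array in metres -> boolean occupancy grid."""
    origin = points.min(axis=0)
    indices = np.floor((points - origin) / voxel_size).astype(int)
    grid = np.zeros(indices.max(axis=0) + 1, dtype=bool)
    grid[indices[:, 0], indices[:, 1], indices[:, 2]] = True
    return grid

cloud = np.random.rand(500, 3) * 0.1      # fake scan of a ~10 cm object
grid = voxelize(cloud)
print(grid.shape, grid.sum(), "occupied voxels")
```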
