Westworld, a 1973 science-fiction film, features robots acting as hosts in a prestigious holiday resort. The robots are indistinguishable from humans except in one respect: their hands are not yet up to par with human hands.
Hollywood and science do not always agree, but on this point they do: robot hands still lag behind human hands in versatility and functionality. This may sound surprising, especially given that robots are routinely deployed on assembly lines.
However, an assembly line is drastically different from a real-world setting. In that controlled environment, a robot's hand performs one specific function and nothing else, so it is far easier to design a hand that fits the purpose.
In the real world, human hands work in a very complicated fashion. Although we do it automatically, our brains perform staggering amounts of computation when we manipulate our hands. The versatility of the human hand is something a robot still cannot match on its own. But robotics engineers are not deterred by the complexity. Artificial Intelligence (AI) is now being used to help machines "learn" how to control a robotic hand.
The Art of Grasping
When we pick up an item with our hands, our brains perform two primary tasks: identifying the object and estimating the required grip strength.
Robots are already quite capable at object identification. Most robots today are equipped with an RGB-D camera, which is well suited to robotics because it captures both color and depth information.
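To make the idea concrete, here is a minimal sketch of how depth and color data from such a camera can be combined, assuming the camera driver already delivers an aligned color frame and depth frame as NumPy arrays. The shapes, the distance threshold, and the simple depth-based segmentation rule are illustrative only, not any particular vendor's API.

```python
# Minimal sketch of working with RGB-D data, assuming aligned color and depth
# frames are already available as NumPy arrays. Values below are illustrative.
import numpy as np

def segment_nearby_object(color: np.ndarray, depth_m: np.ndarray,
                          max_range_m: float = 0.75) -> np.ndarray:
    """Keep only the color pixels belonging to anything closer than max_range_m."""
    # depth_m: H x W array of distances in meters; 0 means "no reading"
    mask = (depth_m > 0) & (depth_m < max_range_m)
    segmented = np.zeros_like(color)
    segmented[mask] = color[mask]          # copy color only where the object is
    return segmented

# Example with synthetic frames (a real pipeline would read these from the camera)
color = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 2.0)           # background roughly 2 m away
depth[200:300, 300:400] = 0.5              # a box-shaped "object" 0.5 m away
segmented = segment_nearby_object(color, depth)
```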
The other challenge is grip strength, which remains a major bottleneck in robotics development. The traditional approach is to hard-code the correct grip force for each item being handled. The main problem with this solution is sheer volume. The real world contains countless everyday items, and coding the right grip strength for each one is painstakingly slow. This is where Artificial Intelligence comes into play.
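To see why hand-coding does not scale, consider a toy sketch of that older approach: a lookup table mapping object names to grip forces. Every new object needs another hand-tuned entry, and anything missing from the table simply cannot be handled. The object names and force values here are made up for illustration.

```python
# Toy illustration of the hard-coded approach: one hand-tuned grip force per item.
# Object names and force values are invented for illustration only.
GRIP_FORCE_N = {
    "ceramic_mug": 8.0,
    "paper_cup": 1.5,
    "egg": 0.9,
    "hammer": 25.0,
    # ... one entry for every object the robot might ever encounter
}

def grip_force_for(item: str) -> float:
    try:
        return GRIP_FORCE_N[item]
    except KeyError:
        # Anything not in the table cannot be gripped safely.
        raise ValueError(f"No grip force defined for '{item}'")

print(grip_force_for("egg"))       # 0.9
# grip_force_for("banana")         # would fail: the table has no entry
```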
Covariant.ai, a startup in California, is now using machine learning to speed up this process significantly.
Rather than having programmers script each specific action, a human demonstrates the task and the robot learns how to mimic it. This allows the machine to learn faster while adapting to real-world situations.
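Covariant.ai has not published its training code, so the following is only a conceptual sketch of learning from demonstrations: recorded pairs of object features and the grip force a human-guided gripper actually used are fitted with a simple model, which can then generalize to objects it has never seen. The features, data, and linear model are stand-ins for whatever the real system uses.

```python
# Conceptual sketch of learning grip force from demonstrations.
# Features, data, and the linear model are illustrative stand-ins only.
import numpy as np

# Demonstration data: [estimated mass (kg), surface friction coefficient]
features = np.array([
    [0.05, 0.9],   # an egg-like object
    [0.30, 0.4],   # a smooth mug
    [1.20, 0.6],   # a heavier tool
    [0.15, 0.8],   # a small soft item
])
grip_force_n = np.array([0.9, 8.0, 20.0, 2.5])   # force the demonstrator used

# Fit a linear model f = w . x + b by least squares (a stand-in for a neural net)
X = np.hstack([features, np.ones((len(features), 1))])   # add bias column
w, *_ = np.linalg.lstsq(X, grip_force_n, rcond=None)

def predict_grip_force(mass_kg: float, friction: float) -> float:
    return float(np.array([mass_kg, friction, 1.0]) @ w)

# A new, never-demonstrated object gets a sensible estimate instead of an error
print(round(predict_grip_force(0.5, 0.5), 2))
```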
The Softness Factor
Most robot hands you see in photos are made largely of metal, yet softness turns out to be an advantage. Human hands are relatively soft, and that soft surface lets the hand "curl" around an object, providing more surface area for gripping. Giving a robot soft hands, however, is not a simple process.
Since sensors are typically rigid components, embedding them in a soft material is a significant challenge. Furthermore, the softer the material, the more you lose in terms of sensing "visibility."
Many robotics experts doubt that a robot hand can ever be a universal solution for gripping and grasping. It is just as unlikely, however, that robotics engineers will stop looking for one.