Robot asks for human help

Willow Garage, a robotics company in Palo Alto, California, has created a robot that asks for human help with objects it cannot recognize on its own. From an AI-development standpoint this may look like getting around the real problem rather than solving it, but in terms of the end result it should work quite well. Consider also that the robot does this by connecting to Amazon’s “Mechanical Turk”, an online platform where thousands of human workers efficiently handle tasks that are very easy for a person but can be very hard for a computer, and it becomes clear that this can be a very efficient process.
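The source article does not spell out the exact plumbing, but the workflow is roughly: photograph the unknown object, post it as a small task (a “HIT”) for human workers, and collect their answers. Below is a minimal sketch of that step in Python, using Amazon’s requester API through boto3; the sandbox endpoint, reward, wording, and question form are illustrative assumptions, not Willow Garage’s actual setup.

```python
import boto3

# Sketch only: how a robot might hand an unrecognized object off to human
# workers on Mechanical Turk. Reward, wording, and question XML are assumptions.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint for testing; the production endpoint would be used for real HITs.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

QUESTION_XML = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>object_label</QuestionIdentifier>
    <QuestionContent>
      <Text>What object is shown in the photo at {image_url}?</Text>
    </QuestionContent>
    <AnswerSpecification>
      <FreeTextAnswer/>
    </AnswerSpecification>
  </Question>
</QuestionForm>"""

def ask_humans_to_label(image_url: str) -> str:
    """Post a HIT asking workers to name the object the robot could not recognize."""
    hit = mturk.create_hit(
        Title="Name the object in this photo",
        Description="Type a short name for the object shown in the linked photo.",
        Reward="0.05",                     # dollars per assignment (illustrative)
        MaxAssignments=3,                  # ask several workers, take the majority answer later
        LifetimeInSeconds=3600,            # how long the task stays available
        AssignmentDurationInSeconds=300,   # how long a worker has to answer
        Question=QUESTION_XML.format(image_url=image_url),
    )
    return hit["HIT"]["HITId"]
```

Asking several workers and keeping the majority answer is the usual way to filter out careless responses on the platform.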

Add to that the fact that, unlike humans, robots and computers can simply copy and paste gained knowledge between each other, and the functionality and efficiency of the process increases further. A new piece of knowledge gained by one robot can be transferred to the others, so a huge amount of knowledge can be loaded onto robots’ brains (or made available to them online, whichever way the system works) in a very short period of time. This will help a large number of robots operate autonomously.
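As a toy illustration of that copy-and-paste idea, here is a hypothetical shared store that one robot can publish a newly learned label to and every other robot can query. The exact-hash fingerprint is purely illustrative; a real system would match objects by visual features rather than identical image bytes.

```python
import hashlib

class SharedObjectKnowledge:
    """A stand-in for an online database of object labels shared by a fleet of robots."""

    def __init__(self) -> None:
        self._labels: dict[str, str] = {}

    @staticmethod
    def fingerprint(image_bytes: bytes) -> str:
        # Illustrative shortcut: an exact hash only matches identical images.
        return hashlib.sha256(image_bytes).hexdigest()

    def publish(self, image_bytes: bytes, label: str) -> None:
        # One robot learns the label once; every robot can reuse it.
        self._labels[self.fingerprint(image_bytes)] = label

    def lookup(self, image_bytes: bytes) -> str | None:
        return self._labels.get(self.fingerprint(image_bytes))
```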

The main problem with a robot recognizing an object is that when variables such as the object’s angle in three dimensions or the lighting conditions change, the robot has difficulty recognizing it, whereas humans do this effortlessly. That said, in recent years there has been considerable progress in AI object recognition, and the technology keeps improving, promising to become as competent as the human eye in the near future. Add to that the accelerating pace of information technology, and within several years we should see software that recognizes objects as well as humans do. We think that is a major milestone in creating a truly functional AI. Think for a second about what happens when you hear a sentence: whether the words refer to material objects or abstract concepts, your brain immediately brings up images and connects them, and that is how it produces meaning from what you just heard or saw. Recognizing objects correctly is therefore a huge step toward a truly functional AI.
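To make the hand-off concrete, here is a minimal sketch of how a robot could decide when to fall back on human help: run an off-the-shelf pretrained classifier (torchvision’s ResNet-50 here, purely as a stand-in for whatever recognizer the robot actually uses) and escalate whenever its confidence falls below a chosen threshold. The threshold value is an arbitrary assumption.

```python
import torch
from torchvision import models
from PIL import Image

# Off-the-shelf pretrained classifier as a stand-in for the robot's own recognizer.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

CONFIDENCE_THRESHOLD = 0.8  # arbitrary; tune for the application

def recognize(image_path: str) -> str | None:
    """Return a label if the classifier is confident, otherwise None
    (meaning: escalate to human help)."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, H, W)
    with torch.no_grad():
        probs = torch.softmax(model(batch)[0], dim=0)
    confidence, idx = probs.max(dim=0)
    if confidence.item() >= CONFIDENCE_THRESHOLD:
        return weights.meta["categories"][idx.item()]
    return None
```

When the function returns None, the robot would hand the image off to the crowd, as in the Mechanical Turk sketch above.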

Of course there is much more to human visual processing than recognizing individual static objects. When you picture yourself in a moving car, looking at the surrounding streets, the other moving cars, and the views far ahead, your brain is processing more information than any computer can handle today (and on top of that, you can think about things in your surroundings that you do not see at that particular instant but still know about in your mind). But once AI technology solves the object recognition problem, these further steps will follow.

Some of these other problems are already being tackled anyway: autonomous vehicles can now complete miles-long courses in competitions such as the DARPA Grand Challenges while obeying traffic rules. Just a few years ago, even the winning vehicle could not get more than a few miles into the race, and that was without any traffic rules.

See the source article about the company’s robot here.