According to reports released by the University of Washington, a team of UW developmental psychologists and computer scientists has shown that robots can learn much the way babies do. They demonstrated how robots can be made to learn by imitating humans, just as babies naturally learn by observing their elders.
Human children learn gradually by observing and imitating what adults do, and by exploring whatever naturally comes within their reach, such as toys. Until now, however, robots have typically been taught tasks either by writing code for each one or by manually moving an arm or other part to show them how it works.
According to this new University of Washington study, robots can instead learn by exploring, observing a human perform a task, and then coming up with their own way to accomplish it. Rajesh Rao, a UW professor in the computer science and engineering department, said this could be a stepping stone toward building robots that learn from humans the way infants do, including teaching a robot to wash dishes, fold clothes, and do other tasks.
The research was published in November in the journal PLOS ONE. It brings together child development research and machine learning approaches, building on work at UW's Institute for Learning & Brain Sciences (I-LABS).
The University of Washington team developed a probabilistic model that could shape robotics in the years to come. UW psychology professor and I-LABS co-director Andrew Meltzoff says that infants are capable of inferring the goals of an adult's actions and then developing their own way to achieve those goals.
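The article does not spell out how such a probabilistic model works, but the core idea of inferring goals from observed actions can be illustrated with a simple Bayesian update. This is a minimal, hypothetical sketch, not the team's actual model; the goal names and probability numbers are invented for illustration.

```python
# Hypothetical sketch of probabilistic goal inference: the robot keeps a
# belief (probability distribution) over candidate goals and updates it
# each time it observes one of the adult's actions.

def update_belief(belief, likelihoods):
    """One Bayesian update: P(goal | action) is proportional to
    P(action | goal) * P(goal), then normalized to sum to 1."""
    posterior = {g: belief[g] * likelihoods[g] for g in belief}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Uniform prior over two made-up candidate goals.
belief = {"stack_blocks": 0.5, "clear_table": 0.5}

# Observed action: the adult holds a block over the stack. This action is
# much more likely if the goal is stacking than clearing (assumed numbers).
likelihoods = {"stack_blocks": 0.9, "clear_table": 0.2}
belief = update_belief(belief, likelihoods)
# belief now strongly favors "stack_blocks"
```

After a few such updates the belief concentrates on one goal, which the robot can then pursue with its own actions.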
According to Meltzoff, children tend to learn quickly because of their playful nature and self-exploration business, which helps them to learn how their actions affect different objects/toys. And they use the knowledge they have gained to work on the next toys or stuff that get.
Roboticists plan to apply the same research to develop new learning approaches for robots so that a robot can also explore and learn how its own actions affect the scene.
Rao’s team tested this probabilistic model in two experiments. In the first, a computer-simulated robot learns to follow a human’s gaze by assuming that its own head movements work the same way as the human’s. It observes the human looking across the room and then tries to figure out where the human is looking.
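The geometry behind this gaze-following idea can be sketched in a few lines. This is an illustrative toy version under simplifying assumptions (a 2D room, a known wall distance, invented function names), not the study's actual simulation.

```python
import math

# Toy sketch of gaze following: the robot assumes the human's head turns
# work like its own, so an observed head angle can be projected onto the
# room to find the looked-at spot, which the robot then turns toward.

def gaze_target(head_pos, head_angle_rad, wall_distance):
    """Project a gaze ray from the human's head onto a wall at a
    known distance (simplifying assumption)."""
    x, y = head_pos
    return (x + wall_distance * math.cos(head_angle_rad),
            y + wall_distance * math.sin(head_angle_rad))

def imitate_gaze(robot_pos, target):
    """Compute the head angle the robot needs to look at the same target
    from its own, different position."""
    dx = target[0] - robot_pos[0]
    dy = target[1] - robot_pos[1]
    return math.atan2(dy, dx)

# Human at the origin looks 45 degrees across a room 2 m deep.
target = gaze_target((0.0, 0.0), math.radians(45), wall_distance=2.0)
# The robot, standing 1 m to the side, turns its head toward that spot.
robot_angle = imitate_gaze((1.0, 0.0), target)
```

The key point mirrors the article: the robot does not copy the human's head angle directly; it infers *where* the human is looking and computes its own movement from its own position.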
In the second experiment, a physical robot learns to imitate human actions such as picking up toy food objects and moving them to different spots on a table. During the experiment, the robot sometimes took a different path to move an object while keeping the same start and end points as the human.
These initial experiments involved simple tasks, but the team plans to extend the model so robots can learn more complicated ones. The key idea is that a robot should understand the goal of a human’s action and then use its own means to achieve it. For example, if a human pushes a heavy object from point A to point B, the robot could infer the goal and simply pick the object up and place it at point B.
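The push-versus-carry example above can be sketched as goal-based imitation: keep only the inferred end state and let the robot choose its own action. This is a hypothetical illustration; the action names and the lift/push logic are assumptions, not the team's implementation.

```python
# Hypothetical sketch of goal-based imitation: rather than copying the
# human's exact motion (pushing along the table), the robot keeps only
# the inferred goal (object ends up at B) and plans its own action.

def infer_goal(observed_trajectory):
    """Take the goal to be the final object position, not the path."""
    return observed_trajectory[-1]

def plan_own_action(start, goal, can_lift):
    """Choose the robot's own way of achieving the inferred goal."""
    if can_lift:
        return [("pick_up", start), ("place", goal)]
    # Otherwise fall back to copying the human's pushing strategy.
    return [("push", start, goal)]

# The human pushes an object from A = (0, 0) to B = (3, 0) along the table.
human_path = [(0, 0), (1, 0), (2, 0), (3, 0)]
goal = infer_goal(human_path)
plan = plan_own_action(human_path[0], goal, can_lift=True)
# plan: pick up at A, place at B -- a different path, same end state
```

This is exactly the article's point: the start and end states match the human's demonstration, but the path in between is the robot's own.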
The research was funded by the Office of Naval Research, the National Science Foundation, and Intel. Meltzoff considers babies the best learners on earth and asks: why not develop robots that learn as effortlessly as a child? Perhaps that’s the future of robotics!
Article By: Davinder