No, it isn’t a chef or a parent whipping up food for guests or the family. It’s a real robot, made of anything but flesh and bone. Government-funded researchers came up with mathematical formulas so advanced that the robot doesn’t need a recipe book to cook; instead, it copies what it sees in a video. Pretty impressive, eh? Wait till you taste the food it cooks — well, that’s the part I’m not so sure about.
Then again, the food isn’t really the point. The Defense Advanced Research Projects Agency isn’t out to open restaurants; its goal is to develop a mathematical language that gives advanced sensors the ability to distinguish what they can use from what they cannot.
The funding was issued under DARPA’s Mathematics of Sensing, Exploitation, and Execution (MSEE) program.
“The MSEE program initially focused on sensing, which involves perception and understanding of what’s happening in a visual scene, not simply recognizing and identifying objects,” says program manager Reza Ghanadan of the agency’s Defense Sciences Office.
One of the research’s earliest successes came when a robot learned how to use kitchen utensils by watching YouTube cooking videos made by humans.
The system relies on two artificial neural networks: one recognizes and remembers what it sees, while the other builds a mathematical representation of the movements the robot will try to imitate.
Cameras let the robot watch what a person is doing — say, picking up a carton of cream and pouring it out — and break the activity down into thousands of individual images of arms, hands, bowls, and cream. The resulting mathematical model identifies the appearance of cream in the vessel it’s being poured into as a goal the robot can imitate, the researchers say.
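As a rough illustration of the pipeline described above — labeling each video frame, then inferring a goal from how the scene changes — here is a toy sketch in Python. All class and function names are hypothetical; this is not the Maryland or DARPA code, just the general idea of goal extraction from labeled frames.

```python
# Toy sketch of the video-to-goal idea: label each frame, then treat
# whatever newly appears in the final frame as the goal to reproduce.
# Names here are illustrative, not the researchers' actual system.

from dataclasses import dataclass


@dataclass(frozen=True)
class Frame:
    """One labeled video frame: the grasp in use and the objects visible."""
    grasp: str           # e.g. "power", "precision", "none"
    objects: frozenset   # objects detected in this frame


def extract_goal(frames):
    """Infer the goal as the change between first and last frame:
    objects present at the end but not the start are the outcome."""
    start, end = frames[0], frames[-1]
    return end.objects - start.objects


# A crude stand-in for "pouring cream into a bowl":
video = [
    Frame(grasp="power", objects=frozenset({"bowl", "cream_carton"})),
    Frame(grasp="power", objects=frozenset({"bowl", "cream_carton", "cream_stream"})),
    Frame(grasp="none",  objects=frozenset({"bowl", "cream_carton", "cream_in_bowl"})),
]

goal = extract_goal(video)
print(goal)  # the new end-state the robot would try to reproduce
```

The point of the sketch is that the goal is a property of the scene’s end state, not of any particular arm motion — which is what lets the robot reach it its own way.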
“We are trying to create a technology so that robots eventually can interact with humans,” says researcher Cornelia Fermüller from the university’s Institute for Advanced Computer Studies.
“Robots need to understand what humans are doing. For that, we need tools so that the robots can pick up a human’s actions and track them in real time,” she says.
“How is an action performed by humans? How is it perceived by humans? What are the cognitive processes behind it?”
Without additional programming, and without human help, the University of Maryland robots were able to imitate the tasks shown in the YouTube videos.
“Others have tried to copy the movements. Instead, we try to copy the goals. This is the breakthrough,” says lead researcher Yiannis Aloimonos. “We chose cooking videos because everyone has done it and understands it. But cooking is complex in terms of manipulation, the steps involved and the tools you use.”
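The distinction Aloimonos draws — imitating the goal rather than the raw movement — can be made concrete with a toy contrast. The function and action names below are invented for illustration; the real system models actions far more richly.

```python
# Toy contrast: replaying a demonstrator's motions versus planning
# any action sequence that achieves the same goal. Names are illustrative.

def copy_movements(demo_trajectory):
    """Naive imitation: replay the demonstrator's exact motions."""
    return list(demo_trajectory)


def copy_goal(goal, available_actions):
    """Goal imitation: keep taking actions until their combined effects
    cover the goal, regardless of how the human moved."""
    plan = []
    achieved = set()
    for action, effects in available_actions:
        if goal <= achieved:
            break
        plan.append(action)
        achieved |= effects
    return plan


# Human demonstration, recorded as raw motions:
demo = ["reach_left", "tilt_wrist_30deg", "hold_2s"]

# The robot's own action repertoire, with the effects of each action:
actions = [
    ("grasp_carton", {"carton_held"}),
    ("pour_into_bowl", {"carton_held", "cream_in_bowl"}),
]

print(copy_movements(demo))                    # mimics motions blindly
print(copy_goal({"cream_in_bowl"}, actions))   # reaches the outcome its own way
```

The movement copier reproduces gestures that may be meaningless for a robot with different arms; the goal copier reaches the same end state with whatever actions it actually has.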
“The research is a significant step in robotics development,” says Ghanadan.
“Instead of the long and expensive process of programming code to teach robots to do tasks, this research opens the potential for robots to learn much faster, at much lower cost and, to the extent they are authorized to do so, share that knowledge with other robots,” he says.