Robotic Toddler “Imagines” How To Stand Before Doing It


One small step for robots, one giant leap for our robotic overlords. Today we’re seeing a robotic toddler that uses brain-inspired algorithms to “imagine” performing tasks before attempting them in the real world, rather than being explicitly programmed the way robots traditionally are.

Darwin the robot mimicked actual toddlers in that it was unsteady on its feet. Despite its clumsy motions, the humanoid robot demonstrated a new way for robots to deal with the challenges of unfamiliar environments: it learns to perform new tasks much the way a toddler relies on developing neurological processes while exploring challenging terrain.

Darwin hails from the lab of Pieter Abbeel, an associate professor at the University of California, Berkeley. Its motions are controlled by several simulated neural networks, algorithms that mimic the way learning happens in a biological brain: connections between neurons strengthen or weaken over time based on input. The researchers liken the way the robot rehearses its movements to an act of imagination.
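In a standard artificial neural network, that strengthening and weakening corresponds to weight updates driven by training data. The snippet below is a generic, minimal illustration of the idea, not the Berkeley team’s code; the data and learning rate are made up for demonstration.

```python
# Generic illustration of connections "strengthening" and "weakening":
# gradient descent on a tiny linear model adjusts each weight up or down
# depending on how useful that input is for predicting the target.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=(100, 3))             # 100 examples, 3 input "neurons"
targets = inputs @ np.array([2.0, -1.0, 0.0])  # only the first two inputs matter

weights = rng.normal(size=3)                   # connection strengths, start random
for _ in range(200):
    predictions = inputs @ weights
    gradient = inputs.T @ (predictions - targets) / len(inputs)
    weights -= 0.1 * gradient                  # useful connections grow, the
                                               # irrelevant one fades toward zero
print(weights)                                 # approaches roughly [2, -1, 0]
```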

For example, when the robot attempts to stand and twist its body, it first runs simulations to train a high-level deep-learning network in how to perform the task. A second deep-learning network is then trained to carry out the task while responding to real dynamics, such as the robot’s joints and the environment it’s in, guided by what the first network learned in simulation.
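The researchers’ actual system is far more sophisticated, but the two-stage idea can be sketched in a few lines. In the toy example below, a “planner” network is trained entirely in a clean simulation to reach a target pose, and a “controller” network is then trained to follow the planner’s guidance while the dynamics are perturbed. All of the names, dimensions, and the stand-in physics function are invented for illustration; this is not the Berkeley code.

```python
# Hypothetical two-stage sketch of "imagine first, then act" (toy dynamics,
# invented names; not the researchers' implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)
STATE_DIM = 4                        # pretend joint-angle state
HORIZON = 25                         # steps per rollout
TARGET = torch.ones(STATE_DIM)       # pretend "standing" pose

def dynamics(state, action, disturbance=0.0):
    # Toy differentiable stand-in for a physics simulator.
    return 0.9 * state + 0.1 * action + disturbance * torch.randn_like(state)

# --- Stage 1: "imagine" the task in a clean simulation ----------------------
planner = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(),
                        nn.Linear(32, STATE_DIM))
opt = torch.optim.Adam(planner.parameters(), lr=1e-2)
for _ in range(300):
    state = torch.zeros(STATE_DIM)             # start from a "crouched" pose
    for _ in range(HORIZON):
        state = dynamics(state, planner(state))
    loss = ((state - TARGET) ** 2).mean()      # end the rollout near the target
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: learn to follow that guidance under messier dynamics ----------
controller = nn.Sequential(nn.Linear(2 * STATE_DIM, 32), nn.Tanh(),
                           nn.Linear(32, STATE_DIM))
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)
for _ in range(300):
    state = torch.zeros(STATE_DIM)
    for _ in range(HORIZON):
        guidance = planner(state).detach()                 # planner is frozen now
        action = controller(torch.cat([state, guidance]))
        state = dynamics(state, action, disturbance=0.05)  # noisy "real" dynamics
    loss = ((state - TARGET) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final pose error:", loss.item())
```

The design point mirrored here is that the second network never has to discover the task from scratch: it only learns to track the behavior the first network already worked out in simulation.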


According to Igor Mordatch, a postdoctoral researcher at UC Berkeley, the team had the robot learn how to stand, make reaching motions with its hand, and stay upright as the ground beneath it moved and tilted. As he put it:

“It practices in simulation for about an hour. Then at runtime it’s learning on the fly how not to slip.”
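In code terms, that “learning on the fly” amounts to continuing to nudge a model with small gradient steps as real observations disagree with what the simulator predicted. The self-contained toy below illustrates the idea with invented names and dynamics; it is not the team’s software.

```python
# Rough sketch of online adaptation: a model "pretrained in simulation" keeps
# updating as the real surface turns out to behave differently than expected.
# Everything here is invented for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)                         # pretend this came from simulation
opt = torch.optim.SGD(model.parameters(), lr=0.05)

def real_world(state, action):
    # The real ground responds differently than the simulator assumed.
    return 0.7 * state + 0.3 * action

state = torch.zeros(2)
action = torch.ones(2)
for step in range(50):
    predicted = model(torch.cat([state, action]))   # what the robot expects
    observed = real_world(state, action)            # what actually happens
    loss = ((predicted - observed) ** 2).mean()     # surprise = prediction error
    opt.zero_grad(); loss.backward(); opt.step()    # adapt on the fly
    state = observed.detach()
print("prediction error after adaptation:", loss.item())
```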

The new approach can help robots work out problems in situations where they don’t have time for extensive trial-and-error testing, while also accounting for real-world variables that simulations don’t always capture. According to Dieter Fox, a professor in the University of Washington’s computer science and engineering department who specializes in robot perception and control, neural networks hold huge potential for robotics.

“I’m very excited about this whole research direction,” he said. “The problem is always if you want to act in the real world. Models are imperfect. Where machine learning, and especially deep learning comes in, is learning from the real-world interactions of the system.”

This is a big step forward for robotics, since robots that can handle unfamiliar issues and situations on the fly could be deployed in many more real-world settings. All hail our robotic overlords.

Source: MIT Technology Review (http://www.technologyreview.com/news/542921/robot-toddler-learns-to-stand-by-imagining-how-to-do-it/)

