Will One Small Step for AI Be One Giant Leap for Robotics?

Robot learns to walk by itself using artificial intelligence.

Posted Apr 24, 2019

Source: ergoneon/Pixabay

Have you ever wondered how human-like a robot can become? Researchers are one step closer, literally, to machines having more human-like capabilities. A cross-disciplinary research team from the University of Southern California (USC) departments of engineering (biomedical, electrical, aerospace and mechanical), computer science, biokinesiology, and physical therapy joined forces to create a robot that can teach itself to walk.

The USC team of Ali Marjaninejad, Darío Urbina-Meléndez, Brian A. Cohn, and Francisco J. Valero-Cuevas published their findings in Nature Machine Intelligence on March 11, 2019.

The researchers created a “biologically plausible algorithm” called “G2P” (general to particular). The algorithm operates in two distinct phases: learning and refinement.

Initially, in the learning phase, a tendon-driven robotic limb undergoes a motor babbling phase in which the system attempts random control sequences and records the resulting kinematics. The input-output data from motor babbling are fed to a multi-layer perceptron artificial neural network (ANN) to train it. In turn, the trained ANN produces an initial output-to-input (inverse) map of the system’s dynamics.
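A minimal sketch of the motor babbling step, assuming a purely hypothetical stand-in for the limb's dynamics (the real robot measures 6D kinematics from its tendon-driven leg; the `limb_dynamics` function below is an invented placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the limb: maps a 3D motor command to a 6D
# kinematic observation. Purely illustrative, not the actual limb model.
def limb_dynamics(motor_cmd):
    return np.concatenate([np.sin(motor_cmd), np.cos(motor_cmd)])

# Motor babbling: issue random 3D activations and record the resulting
# 6D kinematics as paired samples.
motor_cmds = rng.uniform(0.0, 1.0, size=(500, 3))
kinematics = np.array([limb_dynamics(m) for m in motor_cmds])

# Training set for the inverse map: kinematics in, motor commands out.
X_train, y_train = kinematics, motor_cmds
```

The key point is the direction of the pairing: because the goal is an inverse map, the observed kinematics become the network's inputs and the motor commands that produced them become its targets.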

The ANN implementing the inverse map from 6D kinematics to 3D motor control sequences has three layers and twenty-four nodes in total: six nodes in the input layer, fifteen in the hidden layer, and three in the output layer.

The hyperbolic tangent sigmoid transfer function was used to compute a layer’s output from its net input; it is well suited for neural networks when speed matters more than the exact shape of the transfer function. A linear scaling was applied at the output layer.
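The 6-15-3 forward pass described above can be sketched as follows. The weights here are random placeholders rather than trained values, and `tansig` follows the MathWorks formulation, which is mathematically equivalent to `tanh`:

```python
import numpy as np

def tansig(n):
    # Hyperbolic tangent sigmoid: 2/(1 + exp(-2n)) - 1, equal to tanh(n)
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

rng = np.random.default_rng(1)
W1 = rng.standard_normal((15, 6)) * 0.5   # input -> hidden (6 to 15 nodes)
b1 = rng.standard_normal(15) * 0.5
W2 = rng.standard_normal((3, 15)) * 0.5   # hidden -> output (15 to 3 nodes)
b2 = rng.standard_normal(3) * 0.5

def inverse_map(kinematics_6d):
    hidden = tansig(W1 @ kinematics_6d + b1)  # tansig hidden layer
    return W2 @ hidden + b2                   # linear output layer

motor_cmd = inverse_map(np.zeros(6))  # one 3D motor control output
```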

The ANN was trained using Levenberg–Marquardt backpropagation. The Levenberg–Marquardt algorithm solves nonlinear least squares problems by blending Gauss–Newton and gradient descent: at each iteration, the value of the damping parameter λ determines whether the update behaves more like one or the other. Levenberg–Marquardt handles models with many free parameters, typically converging faster than gradient descent alone while being more robust than Gauss–Newton alone.
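As an illustration of the idea (not the authors' implementation), one Levenberg–Marquardt step on a toy linear least squares problem looks like this; the damping parameter `lam` interpolates between a Gauss–Newton step (small `lam`) and a scaled gradient descent step (large `lam`):

```python
import numpy as np

def lm_step(residual_fn, jacobian_fn, params, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam*I) delta = -J^T r."""
    r = residual_fn(params)
    J = jacobian_fn(params)
    A = J.T @ J + lam * np.eye(len(params))   # damped normal equations
    delta = np.linalg.solve(A, -J.T @ r)
    return params + delta

# Toy least squares problem: fit y = a*x + b (true a = 2, b = 1)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

residual = lambda p: p[0] * x + p[1] - y
jacobian = lambda p: np.stack([x, np.ones_like(x)], axis=1)

p = np.zeros(2)
for _ in range(20):
    p = lm_step(residual, jacobian, p, lam=0.1)
```

Because the problem is linear, the damped iteration contracts toward the exact least squares solution; in a real network the residuals are the training errors and J is the Jacobian of the outputs with respect to the weights.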

The Nguyen–Widrow initialization algorithm, whose output contains a degree of randomness, was used to set the ANN’s initial weights and biases.
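A simplified sketch of the Nguyen–Widrow idea follows. The full MATLAB `initnw` routine also spaces the biases evenly and accounts for the input range; this version keeps only the core scheme of random directions with a fixed weight magnitude:

```python
import numpy as np

def nguyen_widrow_init(n_in, n_hidden, rng):
    """Sketch of Nguyen-Widrow initialization for a tansig hidden layer.
    Each weight vector gets magnitude beta = 0.7 * n_hidden**(1/n_in) with
    a random direction; biases are drawn uniformly from [-beta, beta]."""
    beta = 0.7 * n_hidden ** (1.0 / n_in)
    W = rng.uniform(-0.5, 0.5, size=(n_hidden, n_in))
    W = beta * W / np.linalg.norm(W, axis=1, keepdims=True)  # rescale rows
    b = rng.uniform(-beta, beta, size=n_hidden)
    return W, b

rng = np.random.default_rng(2)
W1, b1 = nguyen_widrow_init(6, 15, rng)  # the article's 6-input, 15-node layer
```

The intent of the scheme is to spread the hidden units' active regions across the input space so that no unit starts out saturated.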

The next phase refines the initial learning and consists of two parts: exploration and convergence toward high reward. During exploration, the system tries random variations of its control sequences; attempts that propel the treadmill earn a reward, and the rewarded behavior is reinforced to refine the inverse map. The ANN’s weights are adjusted between attempts so that the system learns from experience.
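The refinement loop can be sketched as simple reward-driven hill climbing. The reward function and parameter vector below are hypothetical stand-ins for the robot's treadmill reward and control parameters, not the paper's actual reward:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reward: how well a candidate control vector propels the
# treadmill (an illustrative quadratic peaking at `target`).
target = np.array([0.3, 0.6, 0.9])
def treadmill_reward(params):
    return -np.sum((params - target) ** 2)

# Exploration: perturb the best control parameters found so far; keep a
# perturbation only when it raises the reward, reinforcing the behavior
# between attempts.
best = rng.uniform(0.0, 1.0, size=3)
best_reward = treadmill_reward(best)
history = [best_reward]
for attempt in range(300):
    candidate = best + rng.normal(0.0, 0.05, size=3)  # random variation
    r = treadmill_reward(candidate)
    if r > best_reward:                               # converge toward reward
        best, best_reward = candidate, r
        history.append(best_reward)
```

In the actual system the update is richer, retraining the ANN's inverse map on the new experience rather than just keeping the best parameter vector, but the explore-then-reinforce structure is the same.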

The result is that the G2P algorithm learns to propel a treadmill autonomously, without explicit modeling of the dynamics or closed-loop error sensing. In effect, the researchers created a robot that learns from experience and experimentation rather than from explicit instructions or prior simulation guidance.

The researchers wrote that their creation “may lead to a class of robots with unique advantages in terms of design, versatility, and performance,” and “contributes to computational neuroscience by providing a biologically and developmentally tenable learning strategy for anatomically plausible limbs.”

Copyright © 2019 Cami Rosso All rights reserved.

References

Marjaninejad, Ali, Urbina-Meléndez, Darío, Cohn, Brian A., Valero-Cuevas, Francisco J. “Autonomous functional movements in a tendon-driven limb via limited experience.” Nature Machine Intelligence. March 11, 2019.

MathWorks. “tansig.” Retrieved 4-24-2019 from https://www.mathworks.com/help/deeplearning/ref/tansig.html

Statistics How To. “Levenberg–Marquardt Algorithm (Damped Least Squares): Definition.” Retrieved 4-24-2019 from https://www.statisticshowto.datasciencecentral.com/levenberg-marquardt-algorithm/

MathWorks. “initnw.” Retrieved 4-24-2019 from https://www.mathworks.com/help/deeplearning/ref/initnw.html