Researchers use machine learning to teach robots how to trek through unknown terrains

A team of Australian researchers has designed a reliable strategy for testing the physical abilities of humanoid robots - robots built to resemble the human body in shape and design. Using a blend of machine learning methods and algorithms, the team enabled test robots to react effectively to unknown changes in a simulated environment, improving their odds of functioning in the real world.

The findings, published in July in the IEEE/CAA Journal of Automatica Sinica, a joint publication of the IEEE and the Chinese Association of Automation, have promising implications for the broad use of humanoid robots in fields such as healthcare, education, disaster response and entertainment.

"Humanoid robots have the ability to move around in many ways and thereby imitate human motions to complete complex tasks. In order to be able to do that, their stability is essential, especially under dynamic and unpredictable conditions," said corresponding author Dacheng Tao, Professor and ARC Laureate Fellow in the School of Computer Science and the Faculty of Engineering at the University of Sydney.

"We have designed a method that reliably teaches humanoid robots to be able to perform these tasks," added Tao, who is also the Inaugural Director of the UBTECH Sydney Artificial Intelligence Centre.

Humanoid robots are robots that resemble the human form - a head, a torso, and two arms and feet - and can communicate with humans and other robots. Equipped with sensors and other input devices, these robots can also perform a limited range of activities in response to outside input.

They are typically pre-programmed for specific activities and rely on two kinds of learning methods: model-based and model-free. The former gives the robot a model of its own dynamics and environment that it can use to plan its behavior in a scenario, while the latter learns behavior directly from trial and error, without such a model. Both approaches have had some success, but neither paradigm alone has proven sufficient to equip a humanoid robot for real-world scenarios, where the environment changes constantly and often unpredictably.
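For readers unfamiliar with the distinction, the short Python sketch below contrasts the two paradigms on a made-up one-dimensional balancing problem. Everything in it - the toy dynamics, the least-squares model, the hill-climbing gain search - is an illustrative assumption for explanation only, not the learning methods used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    """Toy unstable plant the 'robot' must balance; unknown to the learner."""
    return 1.1 * x + 0.5 * u + 0.01 * rng.standard_normal()

# --- Model-based: fit a model from data, then plan actions with it. ---
states, actions, next_states = [], [], []
x = 1.0
for _ in range(200):                      # collect random-exploration transitions
    u = rng.uniform(-1.0, 1.0)
    x_next = true_dynamics(x, u)
    states.append(x); actions.append(u); next_states.append(x_next)
    x = x_next
# Least-squares fit of x_next ~ A*x + B*u (the learned "model").
A, B = np.linalg.lstsq(np.column_stack([states, actions]), next_states, rcond=None)[0]

def model_based_action(x):
    """Choose the action whose *predicted* next state is closest to upright (0)."""
    candidates = np.linspace(-1.0, 1.0, 21)
    return candidates[np.argmin(np.abs(A * x + B * candidates))]

# --- Model-free: tune a feedback gain directly from experience, no model. ---
def rollout_cost(k):
    """Run the real system for 20 steps with u = -k*x and sum |x| (lower is better)."""
    x, cost = 1.0, 0.0
    for _ in range(20):
        x = true_dynamics(x, -k * x)
        cost += abs(x)
    return cost

k = 0.0
for _ in range(300):                      # simple hill-climbing on the gain
    k_try = k + 0.1 * rng.standard_normal()
    if rollout_cost(k_try) < rollout_cost(k):
        k = k_try

print("model-based action at x=0.5:", model_based_action(0.5))
print("model-free gain learned from rollouts:", round(k, 2))
```

The model-based half spends its effort learning how the system responds and then plans against that prediction; the model-free half never builds a prediction at all, it simply keeps whatever controller change happens to perform better in practice.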

To overcome this, Tao and his team introduced a new learning structure that incorporates elements of both model-based and model-free learning to balance a biped, or two-legged, robot. The proposed control method bridges the gap between the two paradigms, allowing a smooth transition from learning a model of the task to learning the control behavior itself. Simulation results show that the proposed algorithm can stabilize the robot on a moving platform undergoing unknown rotations. These results suggest that the approach lets robots adapt to different, unpredictable situations and could therefore be applied to robots outside the laboratory environment.
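To make the idea of bridging the two paradigms concrete, the Python sketch below shows one plausible way a blended controller could be structured: control authority is handed gradually from a model-based feedback term to a model-free learned correction. The class, the gains and the toy platform dynamics are illustrative assumptions, not the controller described in the study.

```python
import numpy as np

class BlendedBalancer:
    """Toy controller that hands control from a model-based term to a
    model-free correction as training progresses. A sketch of the general
    idea only, not a reproduction of the paper's controller."""

    def __init__(self, model_gain, learned_policy):
        self.model_gain = model_gain          # feedback gain derived from a learned model
        self.learned_policy = learned_policy  # model-free component (e.g. a small network)
        self.alpha = 0.0                      # 0 = purely model-based, 1 = purely model-free

    def action(self, state):
        u_model = -self.model_gain * state    # model-based stabilizing feedback
        u_free = self.learned_policy(state)   # model-free learned correction
        return (1.0 - self.alpha) * u_model + self.alpha * u_free

    def anneal(self, step, total_steps):
        # Smoothly shift the blend toward the model-free component.
        self.alpha = min(1.0, step / total_steps)

# Hypothetical use inside a simulation loop; the plant below stands in for
# a biped on a platform whose rotation is unknown to the controller.
balancer = BlendedBalancer(model_gain=2.0, learned_policy=lambda s: 0.5 * np.tanh(-s))
state, disturbance = 1.0, 0.05
for step in range(1000):
    balancer.anneal(step, total_steps=1000)
    u = balancer.action(state)
    state = 1.05 * state + 0.5 * u + disturbance * np.sin(0.01 * step)
print("final deviation from upright:", round(abs(state), 3))
```

The key design choice illustrated here is the smooth hand-over: the model-based term keeps the robot stable while the model-free component is still learning, and the blend shifts only as far and as fast as the annealing schedule allows.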

In the future, the researchers hope to validate their method in more complex environments with more unpredictable, changing variables of varying dimensions, as they test the robots' ability to exert full-body control.

"Our ultimate goal will be to see how our method enables the robot to have control over its entire body as it is exposed to unmeasurable and unexpected disturbances such as a changing terrain. We would also like to see the robot's ability to learn how to imitate human motion, such as ankle joint movement, without having been given prior information."

Credit: Chinese Association of Automation