Robot Leo’s first steps

Robot Leo was programmed to teach himself to walk. It took him about five hours, not counting the time he needed to get up from his falls.


If robots really are to migrate from neat factory floors into our homes to help with household chores or perform care-giving tasks, they need to be able to adapt to the untidy and changing surroundings that human beings call home. A first step towards this distant goal was made by Dr. Erik Schuitema (3mE), who programmed a small 50-centimetre robot named Leo to teach itself how to walk in circles on a flat floor without obstacles.

A film he made shows Leo on something resembling a treadmill: at the end of two long rods that were attached to a pin in the middle of the floor. Sometimes Schuitema would start up the process and watch Leo rehearsing his steps tirelessly. Tchick, tchack, tchick, tchack, BOOM. All evening long.



“Watching a humanoid robot learn its first footsteps without human intervention is truly exciting,” says Schuitema of his STW-funded project. “Its steady progress, with the occasional fall, is exemplary for the long but promising research path that lies ahead.”

The principle that Schuitema applied is called Reinforcement Learning (RL). It works with a happiness budget that the robot strives to maximise in the long run. The robot earns plus points (rewards) for every good step, and minus points (punishments) for wasting time or energy, or for falling.
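The reward-and-punishment idea can be sketched with a tiny tabular Q-learning loop. Everything here is illustrative: the states, actions, probabilities, and reward values are invented for the sketch and are not taken from Leo's actual controller.

```python
import random

random.seed(0)  # reproducible learning run

# Toy sketch of the RL idea described above: the agent collects plus points
# for good steps and minus points for wasted time or falls, and learns to
# maximise the long-run total. All numbers here are illustrative.

STATES = ["walking", "fallen"]
ACTIONS = ["swing_leg", "hold_still"]

def environment(state, action):
    """Hypothetical world model: returns (next_state, reward)."""
    if state == "fallen":
        return "walking", -1.0            # cost of getting back up
    if action == "hold_still":
        return "walking", -0.1            # minus points for wasting time
    if random.random() < 0.1:             # a step attempt sometimes fails
        return "fallen", -2.0             # minus points for falling
    return "walking", 1.0                 # plus points for a good step

# Tabular Q-learning: estimate the long-run value of each (state, action).
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.05, 0.9, 0.1        # learning rate, discount, exploration

state = "walking"
for _ in range(20000):
    if random.random() < eps:             # occasionally explore
        action = random.choice(ACTIONS)
    else:                                 # otherwise exploit current estimates
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = environment(state, action)
    target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    state = nxt

# With a moderate fall penalty, the learned policy prefers to keep stepping.
best = max(ACTIONS, key=lambda a: Q[("walking", a)])
print(best)
```

The same loop structure, with a far larger state and action space and rewards computed from the robot's sensors, underlies learning on real hardware.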

Although detrimental to the robot, the penalty for falling shouldn’t be set too high, Schuitema warns. “Because if you do, the robot will become fearful. It will prefer to stand still instead of taking chances and risking a fall.” It’s sometimes hard not to be anthropomorphic, he admits.
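The “fearful robot” effect can be shown with a little expected-value arithmetic: compare the long-run return of always attempting steps against standing still, for a moderate and an excessive fall penalty. The numbers and the two-state model are assumptions for illustration, not values from the project.

```python
# Illustrative two-state model: "walking" (step succeeds with prob. 1 - p_fall)
# and "fallen" (pay a get-up cost, then resume). Returns are discounted by gamma.

def value_standing_still(time_penalty=-0.1, gamma=0.95):
    # Standing still yields the time penalty forever: a geometric series.
    return time_penalty / (1 - gamma)

def value_stepping(fall_penalty, step_reward=1.0, p_fall=0.1,
                   getup_cost=-1.0, gamma=0.95):
    # Always-step policy: solve the self-consistency equation
    #   V = r_exp + gamma*((1 - p)*V + p*(getup_cost + gamma*V))
    # for V, where r_exp is the expected immediate reward of a step attempt.
    r_exp = (1 - p_fall) * step_reward + p_fall * fall_penalty
    return (r_exp + gamma * p_fall * getup_cost) / \
           (1 - gamma * (1 - p_fall) - gamma**2 * p_fall)

for penalty in (-2.0, -50.0):
    stepping, still = value_stepping(penalty), value_standing_still()
    best = "keep stepping" if stepping > still else "stand still"
    print(f"fall penalty {penalty}: step={stepping:.1f}, "
          f"still={still:.1f} -> {best}")
```

With the moderate penalty, attempting steps is worth the occasional fall; with the excessive one, the rational policy is exactly the fearful standstill Schuitema describes.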

For its two-dimensional walk (the robot is stabilised sideways), Leo has seven motorised degrees of freedom, of which three are trained in walking: both hips and the knee of the swinging leg.

Other RL researchers have experimented mainly with computer simulations, but using a physical robot makes the experiments more realistic and applied. It not only forces you to come up with robot designs that are more or less fall-proof; it also made Schuitema realise that part of the motor intelligence, in the form of sensors and actuators, is best put in the hardware.

Although limited in challenge (2D-walking on a flat surface without the need for environmental sensing), the gait that Leo has autonomously developed is endearing. One can’t help feeling sympathy and admiration for its dedication and that of its creators.

→ Dr. Erik Schuitema, Reinforcement Learning on autonomous humanoid robots, 12 November 2012, PhD supervisors Prof. Pieter Jonker and Prof. Robert Babuška (3mE).

