Statistical Machine Learning for Autonomous Systems and Robots

Abstract:
Statistical machine learning has been a promising direction in control and robotics for more than a decade, since learning models and controllers from data reduces the amount of engineering knowledge that is otherwise required. On real systems, such as robots, the many experiments that machine learning and reinforcement learning methods often require can be impractical and time-consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, pre-shaped policies, or knowledge of the underlying dynamics.

In the first part of the talk, I follow a different approach and speed up learning by efficiently extracting information from sparse data. In particular, I propose to learn a probabilistic, non-parametric Gaussian process dynamics model. By explicitly incorporating model uncertainty into long-term planning and controller learning, my approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art reinforcement learning, my model-based policy search method achieves an unprecedented speed of learning. I demonstrate its applicability to autonomous learning from scratch on real robot and control tasks.
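
As a rough illustration of the idea (a minimal sketch, not the speaker's actual method), the following NumPy code fits a Gaussian process to one-step transition data and returns a predictive mean and variance, so that downstream planning can account for model uncertainty. All class names, kernel choices, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch only: a minimal GP one-step dynamics model in NumPy.
# Not the speaker's implementation; kernel and hyperparameters are assumed fixed.
import numpy as np

def sq_exp_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

class GPDynamicsModel:
    """GP mapping (state, action) -> state change, with predictive variance."""
    def __init__(self, X, U, X_next, noise_var=1e-2):
        self.Z = np.hstack([X, U])            # training inputs (state, action)
        self.Y = X_next - X                   # predict state differences
        K = sq_exp_kernel(self.Z, self.Z) + noise_var * np.eye(len(self.Z))
        self.L = np.linalg.cholesky(K)        # cache the Cholesky factor of K
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, self.Y))

    def predict(self, z):
        """Predictive mean and variance of the state change at test input z."""
        k = sq_exp_kernel(self.Z, z[None, :])
        mean = k.T @ self.alpha
        v = np.linalg.solve(self.L, k)
        var = sq_exp_kernel(z[None, :], z[None, :]) - v.T @ v
        return mean.ravel(), float(var)       # var quantifies model uncertainty
```

The key point of the abstract is what this sketch omits: for long-term planning, the predictive uncertainty must be propagated through the model over many time steps, so that controller learning is penalized in regions where the model is unreliable.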

In the second part of my talk, I will discuss an alternative method for learning controllers for bipedal locomotion, where ground contacts make it hard to learn models of the underlying dynamics. Using Bayesian optimization, we sidestep this modeling issue and optimize the controller parameters directly, without needing a model of the robot's dynamics.
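
To make the idea concrete, here is a generic Bayesian-optimization loop (a sketch of the general technique, not the specific method from the talk): a GP surrogate models the mapping from controller parameters to measured cost, and an expected-improvement criterion picks the next parameters to try on the robot. The function `evaluate_on_robot` and the parameter bounds are hypothetical placeholders.

```python
# Generic Bayesian-optimization sketch (not the talk's specific method).
# `evaluate_on_robot` is a hypothetical placeholder for a costly hardware
# rollout that returns a scalar cost (e.g., a tracking or energy penalty).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best):
    """Expected improvement for minimization under a Gaussian posterior."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(evaluate_on_robot, bounds, n_init=5, n_iter=20, seed=0):
    """Optimize controller parameters directly from cost evaluations,
    with no dynamics model of the robot."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))  # initial random trials
    y = np.array([evaluate_on_robot(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
        gp.fit(X, y)                                     # surrogate of the cost
        cand = rng.uniform(lo, hi, size=(2000, len(bounds)))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])                       # one more real rollout
        y = np.append(y, evaluate_on_robot(x_next))
    return X[np.argmin(y)], y.min()
```

Because each evaluation is a physical experiment, the surrogate's job is to spend those rollouts wisely, trading off exploration (high predictive uncertainty) against exploitation (low predicted cost).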

If time permits, in the third part of my talk, I will discuss state estimation in dynamical systems (filtering and smoothing) from a machine learning perspective. I will present a unifying view on Bayesian latent-state estimation, which allows us both to re-derive common filters (e.g., the Kalman filter) and to devise novel smoothing algorithms for dynamical systems. I will demonstrate the applicability of this approach to intention inference in robot table tennis.
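
For readers unfamiliar with the filtering setting, the sketch below implements the textbook Kalman filter for a linear-Gaussian state-space model (one of the common filters the talk's unifying view re-derives as a special case, not the novel algorithms themselves). All variable names and the matrices A, C, Q, R are generic assumptions.

```python
# Standard Kalman filter for the linear-Gaussian model
#   x_t = A x_{t-1} + w,  w ~ N(0, Q);   y_t = C x_t + v,  v ~ N(0, R).
# Textbook algorithm, shown only to fix notation for the filtering problem.
import numpy as np

def kalman_filter(ys, A, C, Q, R, mu0, S0):
    """Return filtered means/covariances of p(x_t | y_1..t) for each y in ys."""
    mu, S = mu0, S0
    means, covs = [], []
    for y in ys:
        # Predict: push the current belief through the linear dynamics.
        mu_p = A @ mu
        S_p = A @ S @ A.T + Q
        # Update: condition the predicted belief on the new measurement y_t.
        K = S_p @ C.T @ np.linalg.inv(C @ S_p @ C.T + R)  # Kalman gain
        mu = mu_p + K @ (y - C @ mu_p)
        S = (np.eye(len(mu)) - K @ C) @ S_p
        means.append(mu)
        covs.append(S)
    return means, covs
```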

Type of Seminar:
Optimization and Applications Seminar
Speaker:
Dr. Marc Deisenroth
Imperial College London
Date/Time:
May 18, 2015, 16:30
Location:

HG G 19.1, Rämistrasse 101
Contact Person:

Prof. Lygeros
Biographical Sketch:
Dr. Marc Deisenroth is an Imperial College Junior Research Fellow and head of the Statistical Machine Learning Group in the Department of Computing at Imperial College London (UK). From December 2011 to August 2013 he was a Senior Research Scientist at TU Darmstadt (Germany). From February 2010 to December 2011, he was a full-time Research Associate at the University of Washington (Seattle). He completed his PhD at the Karlsruhe Institute of Technology (Germany). Marc conducted his PhD research at the Max Planck Institute for Biological Cybernetics (2006-2007) and at the University of Cambridge (2007-2009). Marc was Program Chair of the "European Workshop on Reinforcement Learning" (EWRL) in 2012 and Workshops Chair of "Robotics: Science & Systems" (RSS) in 2013. His interdisciplinary research expertise centers on machine learning, control, robotics, and signal processing.